Hacker News
Invisible Manipulation: ways our data is being used against us (privacyinternational.org)
159 points by crunchiebones on Oct 13, 2018 | 78 comments



I get annoyed even by the purported advantages of data collection, like Youtube or Amazon recommendations. Just because I curiously clicked on a stream of data points six weeks ago does not mean I suddenly want to be inundated with more of that random tangent today.

So I hate logical positivism. From logical positivism we get the philosophical foundation of modern tracking and advertising, that every data point obtained about a subject is necessarily meaningful and actionable, and that which cannot be measured does not exist. From there we get the unhealthy Nielsen-style reductionist tendencies in marketing, which believe all they need to know about you is your age, race, gender, sexuality, and income bracket. I think Frankl's discussion of dimensional ontology is the most complete picture I've found of why this is a flawed point of view, though I think any person who has been discriminated against based solely on one of those criteria should be able to understand the limitations of that model.

I hope we can start to understand better as a tech culture that, unfortunately, human beings are decidedly unscientific in the sense that past data collected will not always provide accurate conclusions about future behavior. People grow and change, and collecting data silently in the background and trying to draw actionable conclusions without active user input and feedback is a fundamentally inefficient process, with harms that can be quite difficult to measure, though this article articulates some of them quite well.

I suppose I can imagine that if we lived in a society free from the incentives generating the kind of abuses this article lists, and someone has generated an algorithm that could help you find meaning and success and fulfillment and loving relationships based on all the tracking data you provide, I can imagine this to be the dream of big data for which the sacrifice of privacy might be a necessary one. But given that we still live in a society that prefers exploitation to altruism, what we can salvage of our privacy is still probably the best option until we can reach that next stage of our evolution.


You should check out the Frankfurt school if you haven’t, Horkheimer, Adorno, Marcuse, et al.

They were fierce critics of the very sort of reductive and restrictive societal tendencies you’re arguing against here. Positivism is indeed poisonous, a beneficial poison in some ways, but poison nonetheless. Unfortunately, while some things have improved, a lot of the problems such thinkers diagnosed and fought against in the 40s-60s have gotten more severe—the technical edifice determines the lives of the human beings more than ever, and the possibility of alternatives that would increase freedom and not reduce human experience to subtle manipulation and organization by algorithmic and prescribed modes of behavior seems more and more distant.


When I studied economics in the 1980s here in the UK I was dismayed to discover how much positivism implicitly dominated the economics curriculum. At A-level it was even more blatant with Lipsey's "Positive Economics" serving as a textbook for the whole course. Only the study of sociology enabled me to put this perspective in perspective. Lipsey's work presented the theories of supply and demand and Adam Smith's "invisible hand" as some kind of law of nature, the kind of gold sought by Comte and Durkheim. It was only the study of Marx's dialectical materialism which really opened my eyes about how positivism as taught in the economics curriculum was used to justify the capitalist system.


Thanks, I appreciate the recommendation.


>So I hate logical positivism. From logical positivism we get the philosophical foundation of modern tracking and advertising, that every data point obtained about a subject is necessarily meaningful and actionable, and that which cannot be measured does not exist.

That's a straw-man version of logical positivism. If you're able to talk about a failure of the measurement system, you've just measured the failure (by your senses, presumably). So, the positivists would say that it existed.


This is probably the most erudite take I've seen on HN on the data collection issue, especially the comment on being over-reductionist. On one hand: not hard to do. On the other: I'm grateful that you did it. How can I get in touch? I'd love to talk to you more about this stuff.


I added an email in my profile if there's something you wanted to discuss further.


You realize that you're just complaining "they" are not using your data well enough. You certainly can't realistically sort through youtube yourself, nor can you realistically find what you need on Amazon.

Fact is you cannot use these sites realistically without these algorithms. It just can't be done. Or at least one can say, you can use Amazon and Youtube better when guided by these algorithms. I do think that it is the difference between not being able to use them at all and using them, but maybe you think it's just a bit less than that.

(also logical positivism and inferring things through statistics are pretty much diametrically opposite things. No logical positivist is ever going to accept statistical evidence about anything, but no matter)

> I suppose I can imagine that if we lived in a society free from exploitation, and someone has generated an algorithm that could help you find meaning and success and fulfillment based on all the tracking data.

That's exactly what these algorithms do. They try to find what YOU would find meaningful and fulfilling and they really do try to give it to you. When that means money will be exchanged they prefer one vendor (that has paid them) over another (which hasn't, or paid less), but does that really change anything ?


No, actually, I as an adult human being, am quite capable of using search, to find what I want.

Now, I agree, search is also governed by algorithms - but I'm also quite capable of sitting there for a few minutes, paging through until I get past them to find what I want. I do not want anything controlling my filters, except me. Edge cases can be dealt with. Latex lovers need to learn that it's a text editor first.

As for Amazon, I have never found their recommendations helpful, or bought anything via them. If Amazon wants to help me, it really needs to clean its house on fake recommendations, and bogus products.


A simple google search for, say, "string" confirms to me that what you say is not true. I program, so it shows me ... well not clothing. Things I would not want to sort through.

So you should realize that even something as simple as search cannot work well without "using my information". And I very much want it to do that.


If that were true, people would not be able to find anything useful on DuckDuckGo.com and similar sites. Tracking is definitely not a prerequisite for quality search.


https://duckduckgo.com/?q=string&t=h_&ia=web

https://www.google.com/search?q=string

Judge for yourself whether context matters. I suppose the results will differ from person to person, but I feel comfortable saying that while Duckduckgo isn't a lost cause, it contains a lot of bullshit to filter through. Example:

> STRING: functional protein association networks
> STRING is part of the ELIXIR infrastructure: it is one of ELIXIR's Core Data Resources. Learn more ...

I mean ... really ? I suppose it must be a string, but, and maybe this is just me, I'm not looking for genetic manipulation tips or databases when I search for "string".


What I find funny is that you think some crappy algorithm can beat the real neural network, your mind ;)

You convert a fixed problem ("locate search terms that have a high correlation to what I am searching for") into a fuzzy problem ("how can I guess what this person is searching for, based on a sequence of search terms, where I manipulate the results, thereby guaranteeing that the search sequence is poisoned by what I presume they are searching for").

Oddly enough, the latter, is how you get high dwell on a search engine (COUGH COUGH).

As soon as I started using duckduckgo - bang, back to 1 minute searches, as I just altered my search terms and actually got back what I wanted.


Actually, when you link up 2 neural networks, or even a neural network and, say, a search algorithm, the neural network rapidly learns to use the external algorithm to do what it was planning to do anyway.

So there is no "beating" a neural network. The reward for having such a search algorithm is simply the ability to put thoughts into your mind. Your mind may still reject them, and if you don't feed it enough relevant data, it will rapidly abandon you.


presuming you don't just abandon the search is irrelevant. :)

which is why I don't stress it: "agree to disagree over similar premises" can be both a hindrance and a benefit.


Is it called "dwell"? Because it's a useful word for an incredibly annoying trend that Google were doing the last few months I used it: whenever I searched for a word I'd used in a previous search they would try really hard to get me the same results as before, including ignoring quoted words and minus-prefixed strings. At that point I just really wanted Google to forget my previous searches, but of course that's impossible. Since I'm interested in more than one thing, using a different search engine is just better.

On a related note, Google seems to be doing something similar with startpage.com searches, I guess because mostly geeks use it. Or else they are somehow getting enough data through startpage.com to identify me.


dwell is a technical term from production lines ;) it means you adjust the speed at which your conveyor runs, to offer more time under a particular process (implemented along the line). typically heating.

Google, like Amazon, and Apple, have jumped the shark.

Facebook, is the shark. ;)


string c++

The human mind is far greater than the sum of the algorithms - when we allow it to be.


I do remember that from the 90s. There were whole books with google search tips. The extra syntax that you needed to know.

The meaning of quoting search terms (all 3 different ones). The + and - things: +term -term. The special syntaxes "site:", the use of asterisks within quoted search terms, "define:", "link:", "related:", "...", "nearby", "filetype:", ...

And then the many "soft" tips, like leaving out unimportant words and so on.

I mean it was ... euhm ... well I remember it was all kinda cool (and it still works !), but ... today it is better.


Well, I'm not trying to make the case that recommendation algorithms are strictly unhelpful. It would be absurd to imply that there aren't many cases where they have provided a benefit to the end user. I just think that in many cases in my experience they have a bad signal to noise ratio to the point where it annoys me, because of the limitations of surreptitious data collection. Also, Youtube and Amazon don't allow you to turn recommendations off, which to me implies that in a choice between user preference and their business interests, it is clear which one is more important to them.

> That's exactly what these algorithms do. They try to find what YOU would find meaningful and fulfilling and they really do try to give it to you. When that means money will be exchanged they prefer one vendor (that has paid them) over another (which hasn't, or paid less), but does that really change anything ?

To me, this question is asking, "Is there a difference between a data collection algorithm whose primary goal is to help me, and a data collection algorithm whose primary goal is to profit off of me?" While there does exist a subset of transactions where the results would be the same, this whole article upon which we are commenting provides ten case studies of unintended harms or instances where the interests of the data collector and the interests of data collectee are not aligned. If you are looking for a more in-depth discussion illustrating some of the potential harms, I found Robert McChesney's Digital Disconnect to be a convincing first look at the problem, and there are some good documentaries on Thoughtmaybe that may be relevant as well, including Advertising at the Edge of the Apocalypse, Weapons of Mass Surveillance, and Adam Curtis's The Century of the Self.


> To me, this question is asking, "Is there a difference between a data collection algorithm whose primary goal is to help me, and a data collection algorithm whose primary goal is to profit off of me?"

Unfortunately you are describing a choice between A and B. And B does not exist. It would not make a profit, and therefore nothing short of the government providing better search can make it happen, paid for by taxes (Google's operating budget is ~$80 billion, which would have to come out of your taxes). We both know we don't want that to happen, so ...

Given that you're describing a choice between A (for profit recommendations) and nothing, there is but one choice.

Well, there's 2: data collection algorithms that profit "off of you" or no data collection algorithms at all. No context sensitive search, regular lingerie of the wrong gender on your amazon frontpage, baby diapers on the front page of walmart.com when you've long since retired, ...


Somehow people used libraries just fine without having a personal profile based on regressions governing the card catalog. I really can't empathize with this sense that shopping and watching video channels 'can't be done' without recommendation algos.


You say that as if, economically speaking, it is a realistic option to go back to the past. It isn't.

In theory we could go back to heating with open fires. Steam engines. And so forth and so on. We can't. It wouldn't work, and with steam engines it's blatantly obvious why, and I get that with search engines it's not that obvious, but it's the same reason.


Actually there's some pretty cool stuff going on in steam engine research, if you look up "Coalition for Sustainable Rail", but that's neither here nor there.

I'm still not sure what part of filtering search results explicitly instead of statistically/nondeterministically is so backwards to you. Really, what's your angle in insisting on this particular future, where everything is sorted based on personal profiles?


That may be true, but I think you'll find that's further forward movement, not backward movement. I'm sure that aside from the operating principle, there are going to be huge differences between new and old steam engines.

One thing I know about, for example, is nuclear power generators. They are "steam engines", sort-of. That requires a creative interpretation of how it works, as it doesn't match at all.


> Fact is you cannot use these sites realistically without these algorithms.

Really? I have my web browsers set up to always clear all cookies (no exceptions) when I close them (and I don't keep them open long). I also use various browser plugins to get rid of other means of tracking as far as possible.

And I don't even have a YouTube account to log in to. Still, I use the site often and generally find what I'm looking for.

Similar on Amazon, where I have an account, but I only log in when I have put things into the cart and actually want to check out. Until then, again, Amazon knows nothing about me. But I find the things I look for. Oh, and despite being an Amazon customer for 15 years or so, with plenty of buying history for them to analyze... their recommendations are practically always uninteresting. So I don't think I'm losing out on anything...


>... their recommendations are practically always uninteresting.

I've had Amazon recommend books that I've previously purchased from them.


> Fact is you cannot use these sites realistically without these algorithms.

I don't rely on recommendation systems or past activity. I search specifically for what I want and find it without difficulty. I don't need to be told what I want. I don't see the value in being babysat or shepherded by an algorithm.

To quote Alexis de Tocqueville from his book Democracy in America (a bit out of context):

"What good does it do me, after all, if an ever-watchful authority keeps an eye out to ensure that my pleasures will be tranquil and races ahead of me to ward off all danger, sparing me the need even to think about such things, if that authority, even as it removes the smallest thorns from my path, is also absolute master of my liberty and my life; if it monopolizes vitality and existence to such a degree that when it languishes, everything around it must also languish; when it sleeps, everything must also sleep; and when it dies, everything must also perish?"

He was talking about the government. But in a world of massive and pervasive corporate surveillance and in a world of advertising more finely tuned to manipulation, I think we can also apply the quote to the likes of Google and Amazon.


> He was talking about the government

Exactly ... and you're proposing to use the government (expanding its surveillance, btw) to hold back these private corporate surveillances ?

Many of de Tocqueville's objections don't really apply to this situation by the way:

1) google isn't pervasive, nor is facebook or amazon. In reality there's many thousands of search providers. Most are far worse than Google, Amazon, Facebook in the tracking department, and some are more-or-less government departments

2) "when it languishes, everything around must languish, ..." this is a complaint about lack of competition, and this online space isn't lacking for competition at all.

And lastly, I do wonder what you think about the alternative I think you're proposing. Having the government provide and regulate these services. I might argue that that would be far worse in the surveillance department, but we could also just observe that there are 2 countries that actually do this. Russia with Yandex, and China with Baidu. Both are more-or-less government departments. Both governments' security departments deem it a necessity to have people's search terms, associated with their names, available to the secret service and the police. It is not hard to find online reports of western security services asking for and getting access to entire browsing histories [1], so realistically under a government search engine they'd get search term surveillance. Do you think people using these search engines that just report everything to the government are better off, surveillance-wise ... or worse ? Because that's what would happen under your proposed alternative.

It's sad, but realistically the only alternative available under the current governments, in the US, or in Europe, or anywhere else for that matter with one or two exceptions, is government surveillance instead of splintered private surveillance. I think I'll take the private option, thanks.

[1] It's browsing history, not search history, but ...

https://www.theguardian.com/uk-news/2015/oct/30/police-seek-...

https://www.independent.co.uk/life-style/gadgets-and-tech/ne...

https://www.theguardian.com/commentisfree/2013/aug/01/govern...


and you're proposing to use the government (expanding its surveillance, btw) to hold back these private corporate surveillances ?

No. Re-read what I wrote.


So what exactly is your enforcement mechanism for stopping this surveillance ?


I have my browser setup to block the recommended videos column on Youtube and it's a lot more enjoyable/useful to me. I'm not really sure what you're trying to say.


> You realize that you're just complaining "they" are not using your data well enough. You certainly can't realistically sort through youtube yourself, nor can you realistically find what you need on Amazon.

This implies that I need to be nannied by Google or Amazon in order to use their websites.

Somehow, I doubt that.

> Fact is you cannot use these sites realistically without these algorithms. It just can't be done.

There was a point, not very long ago, where these websites had no, or naive, recommendation systems. Search functionality was based on strict to fuzzy keyword matching.

Both YouTube and Amazon were infinitely less frustrating to use back then.
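For what it's worth, the "strict to fuzzy keyword matching" of that era is simple enough to sketch in a few lines. This is a toy illustration, not anyone's actual implementation; the catalog and cutoff are invented:

```python
import difflib

# Hypothetical product catalog; titles are the only searchable field.
CATALOG = [
    "USB-C charging cable",
    "Noise-cancelling headphones",
    "Wireless headphones with microphone",
    "Guitar strings, nickel wound",
]

def keyword_search(query, catalog, cutoff=0.6):
    """Strict-to-fuzzy matching: exact substring hits come first,
    then near-misses scored by difflib's similarity ratio."""
    q = query.lower()
    strict = [t for t in catalog if q in t.lower()]
    fuzzy = [
        t for t in catalog
        if t not in strict
        and difflib.SequenceMatcher(None, q, t.lower()).ratio() >= cutoff
    ]
    return strict + fuzzy

print(keyword_search("headphones", CATALOG))
```

No user profile, no engagement signal: the same query returns the same results for everyone, which is exactly the predictability being argued for here.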


That’s because back then their results weren’t optimized for engagement.


> You realize that you're just complaining "they" are not using your data well enough. You certainly can't realistically sort through youtube yourself, nor can you realistically find what you need on Amazon.

> Fact is you cannot use these sites realistically without these algorithms. It just can't be done. Or at least one can say, you can use Amazon and Youtube better when guided by these algorithms. I do think that it is the difference between not being able to use them at all and using them, but maybe you think it's just a bit less than that.

> (also logical positivism and inferring things through statistics are pretty much diametrically opposite things. No logical positivist is ever going to accept statistical evidence about anything, but no matter)

> > I suppose I can imagine that if we lived in a society free from exploitation, and someone has generated an algorithm that could help you find meaning and success and fulfillment based on all the tracking data.

> That's exactly what these algorithms do. They try to find what YOU would find meaningful and fulfilling and they really do try to give it to you. When that means money will be exchanged they prefer one vendor (that has paid them) over another (which hasn't, or paid less), but does that really change anything ?

Amazon is only unusable without their algorithms because their search, and the product data (indexes, categories, refinable factors) backing it, is utter crap. Instead of building Alexa, Amazon should have made their search adequate for the previous decade's standards.


Or at least one can say, you can use Amazon and Youtube better when guided by these algorithms

In the case of Amazon when I search for "headphones" there's plenty of data in my history on there that could aid a product recommendation - they know what music I like and what devices I use to listen to it via Amazon Music, they know what mobile phone I have from the Amazon app, they know that I like the Anker brand from what I've bought recently, they know approximately how much I spend on gadgets, they know I only buy things that I can get Prime delivery on and they even know which headphones I bought last time. And yet the search listing doesn't have the things I'd like most at the top. Or even in the top ten.

If they're trying to make the search better for me then they have failed completely.


"...you cannot use these sites realistically without these algorithms..."

We're past peak recommenders. There's very little juice left to squeeze from these turnips.

Emphasis is now on better search. Improve the tagging, use some ML to better categorize stuff, leverage personalization to boost & filter results, and misc secret sauce.

Source: Work on recommenders for a retailer.
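The "personalization to boost & filter results" step mentioned above might look roughly like this. A toy sketch only; the items, attributes, and weights are all invented, not any retailer's actual system:

```python
# Re-rank base search results using a per-user profile:
# boost items whose attributes the user likes, drop blocked categories.

def rerank(results, profile, boost=0.5):
    """results: list of (item, base_score, attrs) tuples;
    profile: dict with 'liked' and 'blocked' attribute sets."""
    reranked = []
    for item, score, attrs in results:
        if attrs & profile.get("blocked", set()):
            continue  # hard filter: never show these
        bonus = boost * len(attrs & profile.get("liked", set()))
        reranked.append((item, score + bonus))
    return [item for item, _ in sorted(reranked, key=lambda x: -x[1])]

results = [
    ("budget earbuds",   1.0, {"audio", "budget"}),
    ("anker headphones", 0.9, {"audio", "anker"}),
    ("phone case",       0.8, {"accessory"}),
]
profile = {"liked": {"anker"}, "blocked": {"accessory"}}
print(rerank(results, profile))
```

The point of structuring it this way is that search relevance and personalization stay separable: the base scores are the same for everyone, and the profile only adjusts them at the end.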


> Fact is you cannot use these sites realistically without these algorithms. It just can't be done.

You never used your browser's incognito mode?


>You certainly can't realistically sort through youtube yourself, nor can you realistically find what you need on Amazon.

People will find what they need, period. The algorithms are tuned to show you the items you’re most likely to buy/watch on top of what you actually came for.


>Soon the default will shift from us interacting directly with our devices to interacting with devices we have no control over and no knowledge that we are generating data.

That's already the case. The big problem is that no one cares about it, because it seems like everything just works fine. I don't know and I can't tell if bad things will happen to us with our data everywhere, but it's possible. But how could we possibly convince "normal" people that giving your data out for free is bad?


I’ve tried and failed more often than not. Most of the time the very subject matter is so “boring” that I never get very far. I’m not sure what to do either, most people don’t see the problem and find the topic too obscure to inform themselves.


It takes a special mind to delve into such topics. This has been, and will remain, the case.

Those who understand it become responsible for informing others of which settings they should have, without needing to explain why. That should apply to most fields (science, networking, philosophy).

I think the above mindset at least helps me out to really dumb it down/skip to the real effects when speaking to others. Well, I try ;) But there's always a shimmer of hope that someone will pick something up and go deeper, but that does not happen regularly.

So what to do? Start an online group with people with the same focus and do the battle. Nobody else will care until it's changed and it starts to affect their daily life.


The issue with that is that it's not "your" data. Not because you owe it to the internet companies, or because you give it to the government, which then uses it both for commercial purposes and directly against you (pervasive in China).

Your data is not yours because you aren't unique enough to have unique data. So there are tens of thousands of humans alive that are essentially direct copies of you, at least in terms of behavior. They might have a different address, but they can be influenced in exactly the same way.

So this is not a battle that can be won.


What you say only applies to DNA.

Other types of data sets might not be globally unique, but that's also an irrelevant point if one can't be bucketed to one or more sets. And without pervasive, unregulated tracking, which is the very point of this article, it can't.

You're also too quick to declare winners. This pervasive surveillance is pretty new, society and government move slowly, and it seems that our societies are starting to better understand how this new form of exploitation works. TFA is actually a crystal-clear exposition of the clear disadvantages of data collection.

We still get people on HN crying "how does this affect me?" as a pro surveillance argument and the article puts that question to rest.


Depends on what you mean by won. Is your data used to make you a better consumer or prosumer?

Is data collected without your knowledge actually useful? Twitter data jumps to mind, a lot of junk.


Meaning that keeping data away from people who would use it for their own purposes, from selling you something to impounding your car if you're behind on payments, cannot realistically be accomplished.

For each use, judge for yourself if that's good or bad, effective or ineffective, but the purpose doesn't change whether the data is available or not.


Bad data is bad data. It's true it gets averaged out due to the masses. Still it does provide a somewhat accurate picture, but has a degree of error.

The best evidence I can offer: colleagues in plant and marine biology, using the same tools, are still attempting to get concrete predictive data for the environment. It's simply not there yet.


Nope. What I'm saying is that YOU're average (and me, and everyone). Which means that given a few facts about you, the rest can be correctly inferred. Which means that you cannot prevent systems from inferring how you can, say, be induced to buy something, because you cannot keep those few trivial facts it needs for that from the system.
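The claim that "given a few facts about you, the rest can be correctly inferred" is essentially nearest-neighbor lookalike matching. A toy sketch, with invented profiles and attributes, just to show the mechanism:

```python
# Match a user's few known facts to the closest "average" profile,
# then read off everything else from that profile.

PROFILES = {
    "commuter": {"age": 30, "urban": 1, "has_car": 0, "buys": "headphones"},
    "suburban": {"age": 45, "urban": 0, "has_car": 1, "buys": "lawn care"},
}

def infer(known):
    """known: dict of the few facts we have (numeric attributes only).
    Returns the inferred purchase interest of the closest profile."""
    def distance(profile):
        return sum(abs(profile[k] - v) for k, v in known.items())
    best = min(PROFILES.values(), key=distance)
    return best["buys"]

print(infer({"age": 28, "urban": 1}))
```

Two facts (age and urban/rural) are enough to place the user in a bucket and "infer" a purchase interest, which is the parent's point: you can't withhold enough trivial facts to stay out of every bucket.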


I keep getting ads for things I do not want to buy due to the average. For example, a championship belt that was recently tacked onto my feed. By no means is that something that interests me, but I can understand the average person may be tempted.

In regards to the younger generations I definitely understand the fear. But we get bored as a species; otherwise networks would never have gone down the outrage path, if targeted ads had an absolute effect. And even then, more and more people are setting aside their smartphones and jumping off of social media, or just using it for the bare minimum.

What you are arguing is happening, but that we are truly on the last leg of the data argument is not settled. https://en.m.wikipedia.org/wiki/Carpenter_v._United_States


That just means the algorithms don't work well enough or don't have enough data yet. Also there's a little bit of bad incentives here: these companies also get paid for showing you something at all.


Because everything is fine. The big conceit from privacy advocates is they think people don't know their data is being collected and used, but the reality is, they do know. They just don't care.

Show me a concrete negative impact on my life as a result of the passive data collection perpetrated by Google, Facebook, et. al., and you might change my mind, depending on how negative it is.

I get a lot out of the products I pay for with my data. It feels like a bargain, honestly.


You believe this is all about you, when in fact it's about our societies and vulnerable categories of people (mostly the poor, and various religious, political, ethnic, etc minorities).

You're basically declaring "first they came for everybody but me, and that's okay because I'm still okay!"

If the effects of pervasive surveillance are a negative to society it doesn't matter that you're benefitting - your "bargain" will have to cease existing.


My "bargain" is my choice, and the choice of anyone who wants to make it, and no one gets to rip that away from us because they're afraid of some boogeyman who will never come.

The real conceit comes from people who think the rest of us don't know what we're doing; we do, and we don't care because it doesn't matter. Those people have no special information, and they're not smarter than us.

There is no boogeyman, there are no "dire consequences", and the people peddling that nonsense are tilting at windmills.


No one will come to you and forbid you from using Google or Facebook.

If everything goes right, government and society will instead pressure those companies to offer honest alternatives to their existing spying-based services and make it crystal clear how one pays.

The masochistic, careless, and indifferent will likely still be able to use the services they know and love, but their lack of interest will no longer be paid for by everyone.

Smoking was also pretty cool at one point...


"Spying-based services" is some truly disingenuous labeling.


As we don't know what the future will bring, ethnic cleansing is a realistic scenario.

"It won't happen" or "it won't happen to me" lacks optimism about how long the future lasts.


I'm willing to experience a higher quality of day-to-day life while not being prepared for an "ethnic cleansing", so it is not part of my threat model.


If a company uses the tracking data to personalize prices of stuff that you are interested in, you suddenly might pay much more than you think without ever being aware of it.


I doubt that a) companies would do that in general and b) the few that do are charging low enough of a premium that they're still competitive.


You doubt that companies would do this? Amazon already does it. Airline companies use location heuristics.


> Amazon already does it.

No they don't.


How is calculation of health insurance premiums not #1 on this list ?

Recently I purchased a DIY mole removal product from Amazon. An expansive actuarial table would show that "concern about moles" has a slight positive correlation with "skin cancer". Arguably my premium should go up because of this.

It also blows my mind that Amazon won't allow you to delete a purchase from your Amazon order history. It is there forever and there is no reason that data can't be sold even if the account is closed. To be fair I am making assumptions without having read the privacy policy in Amazon's EULA.


I agree that this is a glaring omission. Car insurance companies already offer monitoring devices you can plug into your OBD-II connector that record all your driving behavior. OnStar also has the ability to report driving behavior to insurance companies. Or how about companies that claim your creditworthiness can be judged by your Facebook friends and posts? How long until these extremely invasive practices become so profitable that companies make them required? I worked for a company whose health insurance offered $50 to employees who got a health screening through them. The next year it was no longer a discount, but a fee if you did not participate.


Insurance makes sense for random events, especially ones not under the insured's control. Setting aside healthcare, since it's not even an insurable risk: people who drive safely are subsidizing people who drive in riskier ways, but only because in the past the technology wasn't there (or wasn't cheap enough) to provide the data to price them accurately. Now that it is possible, I don't see why people who take more risks on the road should be subsidized by those who don't.


My comment is on the loss of privacy. I am not an aggressive driver, and I suspect I might get a discount if I joined one of these programs, but I value my privacy too much for that. I do not want my every move monitored by my insurer. OnStar would even have the ability to report GPS location. How much longer, then, until the locations you visit are also factored in, or that info is used to 'enrich' other insurance types? What if your car insurance shared with your health insurance that you visit fast food restaurants twice a week? That's hypothetical right now, but my point is that the data is so valuable that companies will become more and more invasive to get it, especially insurance companies. At some point these optional privacy-violating practices will become required unless we have legislation protecting us.


Just look at the asymmetry. These organizations push to collect as much data as possible on every aspect of our lives.

But when it comes to information on what they are doing -- and the truth of their internal world? No, that's proprietary, a trade secret, etc., etc., and they slap non-disclosure and non-disparagement strictures on everything and everyone they can.

In other words, look at their actions, not their words. That should tell you something about the value of all this data. And about their self-serving hypocrisy.


I wish this article had actual substance and not just a listicle of off-the-cuff one-liners.


You can click into the links to see the full case-study length explanation.


Most of the very, very short content is true, but being one-liners lessens its credibility considerably, in my opinion. It's almost like just writing "chemtrails" on a wall.


It does have links to articles for more details (click on title)


Instead of trying to prevent data collection, what if we made simple tools that let anyone make their data less useful: data pollution or telemetry vandalism if you will. Either generate a bunch of noise or try exploiting analysis such that a machine would not be able to tell real from fake.

Simple example: instead of an ad blocker, everyone runs a script that "clicks" on every single ad they see.
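A toy sketch of the "data pollution" idea: rather than clicking ads, dilute a tracking profile by interleaving decoy queries with real ones. Everything here is illustrative and assumed, not a real tool: the `DECOY_TOPICS` list, the `noise_ratio` parameter, and the `pollute` function are all made up for the example, and a real version would actually issue the decoy requests.

```python
import random

# Hypothetical decoy interests used to pollute a profile.
DECOY_TOPICS = [
    "gardening tools", "vintage cameras", "marathon training",
    "woodworking plans", "birdwatching guides", "sourdough recipes",
]

def pollute(real_queries, noise_ratio=3, seed=None):
    """Mix each real query with `noise_ratio` random decoys and shuffle,
    so a profiler cannot trivially separate signal from noise."""
    rng = random.Random(seed)
    stream = list(real_queries)
    for _ in range(noise_ratio * len(real_queries)):
        stream.append(rng.choice(DECOY_TOPICS))
    rng.shuffle(stream)
    return stream

# Two real queries plus 3 decoys each -> a stream of 8 queries.
stream = pollute(["cat toys", "litter box"], noise_ratio=3, seed=42)
print(stream)
```

The open question, as the replies below note, is whether uniform random noise fools a profiler at all; correlated, plausible-looking noise is much harder to generate than this sketch suggests.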



I suspect they wouldn't notice the difference. There are already plenty of click-farming frauds targeting advertisements. The targeting seems more like a conceit that individual tracking is more useful than responding to what the person is currently looking for. I personally find aggregate-based suggestions far more useful than anything personalized, like when a search for a novel links to similar ones liked by other people.

If someone is searching for cat toys on Amazon they already know what they want and you can target relevant ads to them.


> Cities around the world are deploying collecting increasing amounts of data and the public is not part of deciding if and how such systems are deployed.

Really now? Aren't city officials elected in North America and Europe - and most other places nowadays?

Go vote if you mind it.


A short video would reach the non-choir audience better.


they've already done that here: https://privacyinternational.org/campaigns/uncovering-hidden... as part of their campaign


why did '10' just disappear from the title? pretty sure I didn't edit it by accident ._.


HN has an explicit rule against "$N reasons why..." headlines, AKA listicles.

