The relation between the user and a service is now completely asymmetrical: it is hard to know what your data is used for. It does not help that the legalese often boils down to 'you will sell your soul'.
As abused as they are, internet users need to build up some healthy "buyer beware" instincts around the tradeoffs. They'll tell Facebook things they'd never physically say in front of strangers due to the bait-and-switch feeling of talking to friends, forgetting the panopticon around them.
I think part of this is the mystery surrounding data practices - people fall for come-ons they wouldn't accept if they understood what was actually happening. So more and louder talk about things like unroll.me is good - if people hear more about others feeling burned by the bait-and-switch, they'll hopefully be more careful, because they see the results of accepting that anodyne "may share with trusted partners" language.
It's not the first time I've asked people who seemed convinced that I was in some sort of abusive Stockholm-syndrome relationship with Google/FB/etc to explain what exactly is so bad about the metadata packages they're selling. The only convincing arguments I've ever heard have centered around insurance companies denying me coverage based on risk factors I'm revealing, but I'm pretty sure nobody is actually doing that. Everything else basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.
Imagine your opponent had access to every single search query, email and message you had ever sent.
Imagine they had access to all of the GPS data from your smartphone so that they could tell exactly where you went each day, how long you spent there and (importantly) which other people were nearby.
Correlate all that with all of the things you've ever bought because your credit card company sold that info years ago.
Add in all the data from your spouse, your children and your closest family and friends.
Now hand it to the people that are trying to smear your reputation in the worst possible way.
You can see how easy it would be to sway elections this way. Even if this isn't something you care much about right now - this data is collected forever, and I doubt you know exactly what your life will look like ten, twenty, thirty or more years from now.
The real nightmare is when insurance companies have access to your search history.
"We see that you googled X and Y on date D which was before your policy started, so your problem Z is probably a pre-existing condition, claim denied"
I am not really sure if targeted ads, targeted news, targeted posts, targeted everything in social media is a good thing for a society in the long run. The physical reality is still the same for us; it would be great if we all lived in the same mental reality, too.
On a personal level, while some advertising is useful (it's good to know which online bookstores ship to my country and what their prices are), I've always found the idea of advertising a bit scary. A certain kind of optimized stimulus will make me some percentage points more likely to want to buy something I didn't want before? Brr. I want more control over what I want.
And some tradeoffs are not about the data. I've also noticed that an infinitely long feed that you can scroll and scroll ... is slightly addictive. I've read that it is so on purpose: all social media platforms popular today make money by advertising, or in other words, by having their users spend their time procrastinating on Twitter/FB/etc (so that they see the ads). I don't think this is a net benefit to individual users or the society as a whole, either.
The arguments on HN and other places for "online privacy" aren't usually hinging on this tragedy-of-the-commons point. They're usually heavily implying that some very real and personal harm is being perpetrated on the unsuspecting sheeple right now and if only they understood what was really happening they'd be up in arms. I don't think a vague uneasiness with consumerism in general is what they're alluding to.
Have you even bothered to read EULAs and privacy agreements?
> The only convincing arguments I've ever heard have centered around insurance companies denying me coverage based on risk factors I'm revealing, but I'm pretty sure nobody is actually doing that.
Think of it this way: suppose you're working on a project. A competitor (or, really, anyone who doesn't like you) rolls in and wants to discredit your stuff. All they have to do is mine your "private" data for anything that can be put into a bad light. Then, because your clients are "private", they take that info to your clients and ask whether they want to continue doing business with such disreputable people.
Or worse. They make it public.
Yeah that's not going to happen. I'm pretty sure nobody is actually doing that. It's not like that's exactly what's being played out in high U.S. politics right now...
Podesta/Pizzagate, Trump/Russia, Clinton/pay-to-play, yeah there's definitely nothing gained from invasions of privacy.
Don't worry. You're just the little guy. You won't be affected. Not until it's your boss that decides he needs a scapegoat.
- Your first link is about the unroll.me debacle. Perfect example to me of a completely overblown story. They used receipt data to produce an anonymized sum (probably broken down by things like region, etc) of total ride-sharing spend, which they sold to Uber. I guess that kinda sucks for Lyft if they weren't expecting Uber to have such an easy time getting at that number, but it's hardly destructive. In most other industries, there are dozens of well-respected, very rich firms whose whole job is to use similar tricks to estimate market share, Nielsen being the best example. Is Uber getting Lyft's numbers this way really that different from Wal-Mart getting Target's numbers by getting Nielsen to pay shoppers to scan their Target purchases and give that data to them? (That's called the Nielsen HomeScan Program, by the way). I understand that HomeScan users are volunteering for this in a more explicit way than unroll.me users are, but again, this is a philosophical distinction. Absolutely 0 harm was done to me as an unroll.me user by them giving Uber this number.
- Your second link is the one good argument that I already acknowledged. The insurance industry is 100% built around developing a good statistical model of risk so they can price premiums appropriately for that risk. Nobody bats an eye that insurance rates are different for women vs men, which is after all a genetic difference like the ones in the article. The only difference is that it's an obvious one. It's a complicated question and I can see why this is a good question for courts to consider, but it's not a foregone conclusion to me that insurance companies shouldn't be able to factor these things into their models. Either way, 99.99% of the things people on HN (or in this thread) complain about with regards to privacy have nothing to do with your genome.
- Your other mess of points are all movie-plot conspiracies that are already illegal in other ways. This doesn't convince me at all. I'm not seriously weighing the risk of some international political scandal being played out with me as the scapegoat as a result of Facebook knowing what movies I like. Neither will the vast majority of people. This does not make them irrational.
My brother does digital advertising for a living, whereas I have an opaque sticker on my front webcam. We come from opposite ends of the privacy spectrum, and talk about this issue every so often. And yet, I've never succeeded in convincing him that lack of privacy online will impact him personally. The reason for that is that it probably won't. He's savvy enough, like most people, to keep his accounts private to random Google searches of him. But he uses FB, snapchat, instagram, and all of them heavily. Ultimately, I'm forced to admit that being in touch with friends on FB adds value to his life. The fact that I personally don't care for that doesn't mean he doesn't or shouldn't, and for him, the trade-offs are completely acceptable.
As for personal harms, I readily admit that most of my concern is about risk. It is more speculative - as far as I know, there's not been a huge number of targeted attacks on random people yet, and for those that do happen, people focus on the specific details, rather than the data that enabled it. You can write off the attacks on political figures, because you're not that important. But that ignores all the Gmail cracks that compromise other services because enough data about them is floating around to make an attack possible.
But 10 years ago, people weren't worried about automated attacks on US tax returns; internet-originated bank heists and ATM harvester stories were novelties instead of near-daily occurrences; and so on. What data did I leak somewhere that will come back to bite me next decade?
My privacy is very important to me, and I want the ability to protect it.
The fact remains, even if you don't care, you should still fight for the ability of others (like me) to do so. That is what liberty is.
If you don't, we may all lose that ability. I don't want that to happen, and there are many others who don't either.
History shows how most dictatorships worked very hard at accessing citizens' information while keeping their own actions secret. Democracies are based on the opposite principle: public bodies are transparent and citizens are attentive observers.
The same power struggle exists between people and corporations.
Seems like a win for them and a mixed blessing for me. Is there a chance that I'll learn of some new product or service that will improve my life enough to be worth the cost of privacy+cash? Yes. Does it make me more likely to buy something that won't be worth the cost? Yes.
I'd rather reduce my costs by giving them less information and have a higher proportion of clearly-irrelevant ads that take less effort to evaluate and decide to ignore.
targeted ads are not better ads. they can be more effective ads. for the advertiser. not for the consumer of the content they are mixed with because they are harder to ignore or even distinguish.
it's not in the interest of the consumer to be told what they want, and being told more effectively definitely isn't.
also, on a personal level targeted ads creep me the hell out, piss me off way more than bullshit ad noise. I get a very different level of exposure in the EU, though, so the level of creepiness isn't quite as normalised.
also with ads being as ubiquitous as they are, it's weird and alienating to make our ambient experiences not shared ones. it's like that vague unease using someone else's logged in youtube, the alienness of the suggestions, or reading along with someone else's Facebook feed and worrying about the bullshit they're peddled, which for once sticks out because it's not targeted at you. but they didn't ask for that, they were just told what to want.
surely you don't think that the ads targeted at you are different? (even if you do, that kinda undermines the point they are generally win/win)
How do you know you'll get better ads? Maybe the algorithm decides you have the money to pay, and you'll be presented with overpriced products. I have certainly fallen victim to advertising before: I bought products I paid too much for, and made impulse purchases I regretted but couldn't be bothered to send back.
This shouldn't be on the users. The disparity in knowledge between the people running the services and the people using them is huge. The reason a lot of laws (in general) exist is to protect the vulnerable from harm, including harm they don't have the capacity to understand. I think that's an important facet of this debate. It's not just 'free market/free choice' etc. The harm involved in giving up your privacy isn't fully understood by many people so it's up to the law to protect them.
I absolutely agree. But unless and until something changes, that's how it is.
That was something Joseph Weizenbaum was surprised by with ELIZA. He expected that people would regard it as trivial, but many people chose to share their deepest feelings with the program and felt an emotional connection to it. It might have been helpful in this regard that they weren't talking directly to another person.
Not having access to what is done with your personal information is one of the things that prevents such an instinct from being developed.
Outlaw all the spying. Let non-tracking advertising, community efforts, and regular ol' paying for stuff fill the gap. We'll lose little but regain our privacy, which is gone unless we all do this (see: shadow accounts tracking people because their friends posted about them, Google mining your messages because you sent them to someone who uses Gmail)
It's a good solution for more than one reason. It also helps you avoid vendor lock-in, prevents you from getting stuck with services that get gradually dumbed down in their pursuit of growth at the cost of actual usefulness, and most importantly, shields you from losing something you depend on because the startup running the service finally got their exit.
It really depends on what sort of tracking you're most concerned about, though. If you're concerned about Facebook or similar having your pictures and messages, then a solution like this is a good option. Yes, your provider knows it's you, but unless they've invested serious effort in backdooring your hardware, they can't tell what you send over SSL'd connections, they can't tell what people visiting that page are seeing, and most importantly, by and large they don't care - they see no profit from that.
Exactly. It should be well-known that "If you are not paying for the product, you are the product.".
Differential privacy also has the nice property that it works like a fungible currency: it is additive, so even if you hand out x, y, and z units to different folks, they collectively can't spend more than x + y + z units, no matter how clever they are. In principle this could help folks determine a value for their sensitive data, and perhaps set up decent standards for people to value their own privacy (e.g. "selling 0.1 units of differential privacy from your email could shift your health insurance premiums by 10%").
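The additive-budget idea described above can be sketched in plain Python. This is an illustrative toy, not any real differential-privacy library; the `PrivacyBudget` class and its names are invented for this example.

```python
# Toy illustration of sequential composition in differential privacy:
# epsilon costs of independent data releases simply add up, so no set
# of recipients can collectively "spend" more than the total budget.

class PrivacyBudget:
    """Tracks cumulative epsilon spent against a fixed total budget."""

    def __init__(self, total_epsilon: float):
        self.total_epsilon = total_epsilon
        self.spent = 0.0

    def spend(self, epsilon: float) -> bool:
        """Record a release costing `epsilon`; refuse if it would
        push cumulative spend past the total budget."""
        if self.spent + epsilon > self.total_epsilon:
            return False
        self.spent += epsilon
        return True

budget = PrivacyBudget(total_epsilon=1.0)
assert budget.spend(0.3)       # hand 0.3 units to one party
assert budget.spend(0.5)       # 0.5 units to another party
assert not budget.spend(0.4)   # 0.3 + 0.5 + 0.4 exceeds the budget
```

The point of the sketch is the pricing intuition from the comment above: if units of epsilon compose additively, they behave like a fungible currency that individuals could, in principle, value and ration.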
That said, how much of a culture shift does there need to be on the consumer side? That is, even if privacy policies etc. are boiled down to 3-5 bullet points, how many people will continue to wave them away, and how do we change that?
Let's try to imagine how this would work with the example in the article: unroll.me and the information they sold to Uber. They couldn't have told people they were going to sell their data to Uber; that deal was almost certainly made after much of the data was gathered. Even if you restricted it to the future (so you have to know your partners ahead of time, which is practically impossible anyway, but let's go with it), how would you explain that in a non-legalese way that really captures what you are doing?
I am not sure there is a human language equivalent of what the legalese says. The company won't even know in advance what ingenious uses of your data they'll be able to cook up.
> the legalese often boils down to 'you will sell your soul'.
Okay, maybe that works.
If someone convinces a friend or family member of the lie that they can trade privacy for services, then my communications with them are compromised without my consent.
This is all about every person's right to choose to be private. The idea that it's okay to impinge on this right as long as someone thought of it as a transaction is morally bankrupt.
None of these things are necessarily the end of the world if there's some data protection guarantee. But to say it's simply a choice ignores how much data other people have on us, just by going about our daily lives.
It's even less acceptable if a company uses its customers to spy on people who aren't customers.
To rationalize it by saying "well, you could have used a different bank" is self-indulgent nonsense.
Generally speaking, this article seems to gloss over the fact that this data is generated by actual human beings; examine for instance this section on ownership:
"And it adds to the confusion about who owns data (in the case of an autonomous car, it could be the carmaker, the supplier of the sensors, the passenger and, in time, if self-driving cars become self-owning ones, the vehicle itself)."
So the person who generated the data doesn't own it, despite owning the vehicle on which it was generated? Isn't that kind of crazy? edit: Ah, I do see the passenger in there now. I'll leave this up because the point still stands that "the passenger" is a very passive way to describe the human being involved.
I don't use FB, but a data-promiscuous person who knows a ton about me recently unfriended my closest contacts. Ignorance is bliss?
That said, privacy is being commoditized for everyone as well with tools such as Snapchat, the Epic Privacy Browser and Tor.
One of these things is not like the others.
I mean, seriously: if you own a car, there's a certain level of competence expected from the owner, even if it's just basic stuff like putting the gas pump in the right spot, getting the oil changed regularly, and keeping an eye on the various dials and gauges to make sure your engine isn't on fire. There are no such expectations for a smartphone; they just sell you the thing at a steep subsidy and send you on your merry way. I find this interesting because you can do far more damage to your life or someone else's with a smartphone than with a car.
Could you elaborate? I find it difficult to imagine how one kills somebody using a smartphone, while with a car it's pretty obvious.
I should've been more specific, obviously a car will do more physical damage but I'm talking less about actually hurting someone and more about ruining someone's life.
I think that's a little simplistic. Social media like Twitter and Facebook collapse the context and expand the audience of a joke/meme until it's almost sure to reach someone who will find it genuinely offensive.
In real life, a joke (good or bad) normally reaches a small number of people at a time. If it's good it may spread and if it's bad it will eventually meet with disapproval that will limit its spread. The online space inverts this dynamic completely by making the spread of someone else's mistake a performative act. Now a bad/offensive joke will spread like wildfire because those spreading it get to stick someone else's name on it and say: "look how bad of a joke this person made".
I find it disturbing that so many people seem to think Twitter and Facebook and the rest are just places to dump whatever thought runs through your brain, as though nobody but the people you want to see it will ever see it. But then again, that's what things like Facebook advertise themselves as, and that's the type of content Twitter encourages.
The mistake, in my opinion, rather lies in reacting so strongly to a bad joke.
It's surprisingly more difficult than it used to be. Trying to get your server reputation just right so that gmail won't flag you as spam is much tougher than it was in the past, for example.
Speaking of spam - there is so much of it that you need something like Gmail to filter it out.
I still send from gmail, although I use a personally-hosted (and aggressively firewalled/ monitored) box for receiving mail for things like password resets.
Wait until you hear about mint.com.... ;)
Since I strongly suspect that data's going to get sold by my bank, credit card company, insurance broker, loan provider and so on, the convenience of mint outweighs its disadvantages to me.
Tell that to your email provider...
They don't advertise to me and they let me encrypt my mail incoming, at rest and between my mail client and their servers.
Initial discussion of the "buried lede" in the NYT article Uber CEO Plays with Fire: https://news.ycombinator.com/item?id=14178397
Discussion of the CEO's non-apology: https://news.ycombinator.com/item?id=14181152