How Privacy Became a Commodity for the Rich and Powerful (nytimes.com)
233 points by tysone on May 9, 2017 | 97 comments



I actually wrote a House Bill for Montana in 2013 that would have been a pretty comprehensive privacy law for the state. My friend Dan tried his best to get it passed, but it was a bit too much new code for legislators to stomach. Thankfully Dan has broken the original legislation down into smaller parts and has really succeeded in improving privacy in Montana.

https://legiscan.com/MT/text/HB400/2013


Well done! With the GDPR privacy law in the EU and the new legal status of ISP data collection, I expect we will have much more work to do in this area, across the country. Hopefully lawmakers will decide to protect consumers. Thanks for getting the ball rolling in Montana!


Thanks! I'm not hopeful for federal level privacy protections on par with the EU or even other states (Massachusetts being a leader in the US), so I think a state based strategy is the best bet. Especially in Montana as we have a constitutional right to privacy which makes it much easier to pass pro-privacy laws.


I think the article touches upon a key problem: even if some people are in principle willing to sacrifice some privacy in order to get a product for free, it should be required to state what data is shared with whom in clear human language (and not in a 20 page wall of legalese).

The relationship between the user and a service is now completely asymmetrical: it is hard to know what your data is used for. It does not help that the legalese often boils down to 'you will sell your soul'.


As Bruce Schneier used to say, given the choice between security and .gifs of dancing pigs, people will take the pigs every time.

As abused as they are, internet users need to build up some healthy "buyer beware" instincts around the tradeoffs. They'll tell Facebook things they'd never physically say in front of strangers due to the bait-and-switch feeling of talking to friends, forgetting the panopticon around them.

I think part of this is the mystery surrounding data practices - people fall for come-ons they wouldn't accept if they understood what was actually happening. So more and louder talk about things like unroll.me is good - if people hear more about others feeling burned by the bait-and-switch, they'll hopefully be more careful, because they see the results of accepting that anodyne "may share with trusted partners" language.


Okay, I'll bite - what kinds of things do I not "understand is actually happening" in terms of what companies like Google and Facebook do with my data? I'm a reasonably technical person and I honestly believe the "tradeoffs" I'm making with companies like that are firmly in my favor.

It's not the first time I've asked people who seemed convinced that I was in some sort of abusive Stockholm-syndrome relationship with Google/FB/etc to explain what exactly is so bad about the metadata packages they're selling. The only convincing arguments I've ever heard have centered around insurance companies denying me coverage based on risk factors I'm revealing, but I'm pretty sure nobody is actually doing that. Everything else basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.


Imagine ten years from now you decide to run for some kind of important political office.

Imagine your opponent had access to every single search query, email and message you had ever sent.

Imagine they had access to all of the GPS data from your smartphone so that they could tell exactly where you went each day, how long you spent there and (importantly) which other people were nearby.

Correlate all that with all of the things you've ever bought because your credit card company sold that info years ago.

Add in all the data from your spouse, your children and your closest family and friends.

Now hand it to the people that are trying to smear your reputation in the worst possible way.

You can see how easy it would be to sway elections this way. Even if right now it isn't something you care much about - this data is collected forever, and I doubt you know exactly what your life is likely to look like ten, twenty, thirty or more years from now.


Imagine ten years from now you decide to run for some kind of important political office.

The real nightmare is when insurance companies have access to your search history.

"We see that you googled X and Y on date D which was before your policy started, so your problem Z is probably a pre-existing condition, claim denied"


Playing devil's advocate: they might expose that a candidate just likes to grab women by the pussy, women who gave no indication that they'd like that to happen. The candidate might still successfully be elected.


How is that playing devil's advocate? Are you implying that if such a campaign were to hurt a candidate you don't personally approve of, it would be perfectly fine?


No, I'm saying that having even massive dirt like this on a candidate apparently doesn't prevent them from winning anyway. A common response to pussygate was that you hear stuff like that in any men's room. So digging up dirt from twenty years ago from someone's Facebook page will likely be met with the same response.


I like this explanation but would suggest that the more likely scenario is that all that data is accidentally dumped onto the Internet for anybody to dig through.


>Everything else basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.

I am not really sure if targeted ads, targeted news, targeted posts, targeted everything in social media is a good thing for a society in the long run. The physical reality is still the same for us; it would be great if we all lived in the same mental reality, too.

On a personal level, while some of the advertising is good (it's useful to know which online bookstores ship to my country and what their prices are), I've always found the idea of advertising a bit scary. A certain kind of optimized stimulus will make me some percentage points more likely to want to buy some kind of thing I did not want before? Brr. I want more control over what I want.

And some tradeoffs are not about the data. I've also noticed that an infinitely long feed that you can scroll and scroll ... is slightly addictive. I've read that it is so on purpose: all social media platforms popular today make money by advertising, or in other words, by having their users spend their time procrastinating on Twitter/FB/etc (so that they see the ads). I don't think this is a net benefit to individual users or the society as a whole, either.


For sure I'm on the same page as you that the current "media climate" is not necessarily a healthy one for society as a whole. But almost nobody uses that kind of "if everybody simultaneously stopped doing X the world would be a better place, so I'll stop doing X" mentality when deciding on their own behavior. Hence climate change, nuclear disarmament, etc.

The arguments on HN and other places for "online privacy" aren't usually hinging on this tragedy-of-the-commons point. They're usually heavily implying that some very real and personal harm is being perpetrated on the unsuspecting sheeple right now and if only they understood what was really happening they'd be up in arms. I don't think a vague uneasiness with consumerism in general is what they're alluding to.


A specific example of potential 'personal harm' due to highly detailed targeting is when Facebook identifies an individual who likely suffers from depression based on posts and preferences, and then targets advertisements that perpetuate an environment of depression. Anecdotal ref: https://www.vice.com/en_us/article/when-facebook-and-instagr...


> I'm a reasonably technical person and I honestly believe the "tradeoffs" I'm making with companies like that are firmly in my favor.

Have you even bothered to read EULAs and privacy agreements?

> The only convincing arguments I've ever heard have centered around insurance companies denying me coverage based on risk factors I'm revealing, but I'm pretty sure nobody is actually doing that.

https://theintercept.com/2017/04/24/stop-using-unroll-me-rig...

http://www.npr.org/sections/health-shots/2013/01/17/16963404...

Think of it this way: suppose you're working on a project. A competitor (or, really, anyone who doesn't like you) rolls in and wants to discredit your stuff. All they have to do is mine your "private" data for anything that can be put into a bad light. Then, because your clients are "private", they take that info to your clients and ask whether they want to continue doing business with such disreputable people.

Or worse. They make it public.

Yeah that's not going to happen. I'm pretty sure nobody is actually doing that. It's not like that's exactly what's being played out in high U.S. politics right now...

Podesta/Pizzagate, Trump/Russia, Clinton/pay-to-play, yeah there's definitely nothing gained from invasions of privacy.

Don't worry. You're just the little guy. You won't be affected. Not until it's your boss that decides he needs a scapegoat.


You're kind of proving my point.

- Your first link is about the unroll.me debacle. Perfect example to me of a completely overblown story. They used receipt data to produce an anonymized sum (probably broken down by things like region, etc.) of total ride-sharing spend, which they sold to Uber. I guess that kinda sucks for Lyft if they weren't expecting Uber to have such an easy time getting at that number, but it's hardly destructive. In most other industries, there are dozens of well-respected, very rich firms whose whole job it is to use similar tricks to estimate market share, Nielsen being the best example. Is Uber getting Lyft's numbers this way really that different from Wal-Mart getting Target's numbers by getting Nielsen to pay shoppers to scan their Target purchases and give that data to them? (That's called the Nielsen HomeScan Program, by the way.) I understand that HomeScan users are volunteering for this in a more explicit way than unroll.me users are, but again, this is a philosophical distinction. Absolutely zero harm was done to me as an unroll.me user by them giving Uber this number.

- Your second link is the one good argument that I already acknowledged. The insurance industry is 100% built around developing a good statistical model of risk so they can price premiums appropriately for that risk. Nobody bats an eye that insurance rates are different for women vs. men, which is after all a genetic difference like the ones in the article. The only difference is that it's an obvious one. It's a complicated question and I can see why this is a good one for courts to consider, but it's not a foregone conclusion to me that insurance companies shouldn't be able to factor these things into their models. Either way, 99.99% of the things people on HN (or in this thread) complain about with regard to privacy have nothing to do with your genome.

- Your other mess of points are all movie-plot conspiracies that are already illegal in other ways. This doesn't convince me at all. I'm not seriously weighing the risk of some international political scandal being played out with me as the scapegoat as a result of Facebook knowing what movies I like. Neither will the vast majority of people. This does not make them irrational.


You're raising excellent points, but I doubt you'll get a satisfactory explanation here.

My brother does digital advertising for a living, whereas I have an opaque sticker on my front webcam. We come from opposite ends of the privacy spectrum, and talk about this issue every so often. And yet, I've never succeeded in convincing him that lack of privacy online will impact him personally. The reason for that is that it probably won't. He's savvy enough, like most people, to keep his accounts private to random Google searches of him. But he uses FB, snapchat, instagram, and all of them heavily. Ultimately, I'm forced to admit that being in touch with friends on FB adds value to his life. The fact that I personally don't care for that doesn't mean he doesn't or shouldn't, and for him, the trade-offs are completely acceptable.


Those who aren't savvy (or rich/powerful) enough are those who would be impacted more by lack of privacy - even if the mechanism for that lack of privacy is the user's own lack of understanding of what not to post online or how to control apps' privacy settings. COPPA restrictions are a good example of important, functional control of data privacy for those who aren't full-on techies (like children).


My commentary isn't talking about HN denizens who sit around debating this stuff in their spare time. I'm talking about folks who buy surveillance-TVs 'cause they're new and cool, hook it up to their Wifi because that's what the directions said, and never thought about the idea that they just installed a remote listening device in their bedroom. And sure, probably only robots listen to it. Probably.

As for personal harms, I readily admit that most of my concern is about risk. It is more speculative - as far as I know, there's not been a huge number of targeted attacks on random people yet, and for those that do happen, people focus on the specific details, rather than the data that enabled it. You can write off the attacks on political figures, because you're not that important. But that ignores all the Gmail cracks that compromise other services because enough data about them is floating around to make an attack possible.

But 10 years ago, people weren't worried about automated attacks on US tax returns, internet-originated bank heists and ATM-harvester stories were novelties instead of near-daily occurrences, and so on. What data did I leak somewhere that will come back to bite me next decade?


Since you seem to have decided that your privacy is unimportant, I'd like to let you know that I have not.

I believe my privacy is very important to me, and want the ability to protect it.

The fact remains, even if you don't care, you should still fight for the ability of others (like me) to do so. That is what liberty is.

If you don't, we may all lose that ability. I don't want that to happen, and there are many others who don't either.


Information is power when it is asymmetrical, detailed, indexable and searchable.

History shows how most dictatorships worked very hard at accessing citizens' information while keeping their own actions secret. Democracies are based on the opposite principle, where public bodies are transparent and citizens are attentive observers.

The same power struggle exists between people and corporations.


> Everything else basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.

Seems like a win for them and a mixed blessing for me. Is there a chance that I'll learn of some new product or service that will improve my life enough to be worth the cost of privacy+cash? Yes. Does it make me more likely to buy something that won't be worth the cost? Yes.

I'd rather reduce my costs by giving them less information and have a higher proportion of clearly-irrelevant ads that take less effort to evaluate and decide to ignore.


> Everything else basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.

targeted ads are not better ads. they can be more effective ads. for the advertiser. not for the consumer of the content they are mixed with because they are harder to ignore or even distinguish.

it's not in the interest of the consumer to be told what they want, and being told more effectively definitely isn't.

also, on a personal level targeted ads creep me the hell out, piss me off way more than bullshit ad noise. I get a very different level of exposure in the EU, though, so the level of creepiness isn't quite as normalised.

also with ads being as ubiquitous as they are, it's weird and alienating to make our ambient experiences not shared ones. it's like that vague unease using someone else's logged in youtube, the alienness of the suggestions, or reading along with someone else's Facebook feed and worrying about the bullshit they're peddled, which for once sticks out because it's not targeted at you. but they didn't ask for that, they were just told what to want.

surely you don't think that the ads targeted at you are different? (even if you do, that kinda undermines the point they are generally win/win)


> basically just boils down to "they get to show you better ads" which honestly seems like a win-win to me.

How do you know you'll get better ads? Maybe the algorithm decides you have the money to pay and you will be presented with overpriced products. I for sure fell victim to advertising before and bought products that I paid too much for, and made impulse purchases that I regretted ordering but still couldn't be bothered to send back.


It is not only Facebook and Google. Case in point: the recent revelations about Unroll.me, which was used even by a lot of HN regulars without their knowing how their email and data were used.


>> As abused as they are, internet users need to build up some healthy "buyer beware" instincts around the tradeoffs.

This shouldn't be on the users. The disparity in knowledge between the people running the services and the people using them is huge. The reason a lot of laws (in general) exist is to protect the vulnerable from harm, including harm they don't have the capacity to understand. I think that's an important facet of this debate. It's not just 'free market/free choice' etc. The harm involved in giving up your privacy isn't fully understood by many people so it's up to the law to protect them.


It's also not necessarily anti-market to regulate this sort of thing. Buyers who are confident they're not being screwed and competitors who are prohibited from screwing people are good things, provided you want to provide a product/service and not screw people.


> This shouldn't be on the users.

I absolutely agree. But unless and until something changes, that's how it is.


Though I agree, I also work with users every day who do not have the understanding to build those instincts. They don't understand their email and Facebook passwords might be different and they don't understand why Facebook might lock their account. Some of these users have learning disabilities, some mental health issues, some have simply never had the opportunity to learn how and why things work the way they do. And in my limited interaction, I can't impart that instinct.


> They'll tell Facebook things they'd never physically say in front of strangers

That was something Joseph Weizenbaum was surprised by with ELIZA. He expected that people would regard it as trivial, but many people chose to share their deepest feelings with the program and felt an emotional connection to it. It might have been helpful in this regard that they weren't talking directly to another person.


> As abused as they are, internet users need to build up some healthy "buyer beware" instincts around the tradeoffs

Not having visibility into what is done with your personal information is one of the things that prevents such an instinct from being developed.


We need laws to set standards at whatever the floor of "what a reasonable person would expect you can do" is. Safe, non-paranoid market participation isn't gonna happen otherwise.

Outlaw all the spying. Let non-tracking advertising, community efforts, and regular ol' paying for stuff fill the gap. We'll lose little but regain our privacy, which is gone unless we all do this (see: shadow accounts tracking people because their friends posted about them, Google mining your messages because you sent them to someone who uses Gmail)


Free means you are trading your privacy for the service. Unfortunately that is not common knowledge, but it's how the largest part of the internet works. Time to spread the word more aggressively...


I will have to disagree. In general, paying means you have to have some sort of account, which makes it easier for them to track you. To be frank, I believe the actual solution is to avoid any cloud-based services.


> To be frank I believe that the actual solution is to avoid any butt-based services.

It's a good solution for more than one reason. It also helps you avoid vendor lock-in, prevents you from getting stuck with services that get gradually dumbed down in their pursuit of growth at the cost of actual usefulness, and most importantly, shields you from losing something you depend on because the startup running the service finally got their exit.


Do you have that extension thing that turns cloud into butt? Nice!


Yes, it's been keeping the Internet honest for me for many years now. I'm so used to it that I forget about it every now and then ;).


There's still any number of ways to track you, however: ecommerce, credit card, ISP, email provider, etc. It would be very hard to live in the modern world without some of those tools, and few of them have any privacy or data protection guarantee attached.


Sure, but you can do a lot. If you pay a dedicated server or colocation provider to host hardware you own, they have a lot less access to your stuff than if you say, use a cloud VM provider, which is in itself at least better than using an SaaS-type solution where you're 1 database query away from fucked and with paid SaaS you're still miles better than free services where selling your data is the only means to pay for the service and to profit.

It really depends on what sort of tracking you're most concerned about, though. If you're concerned about Facebook or similar having your pictures and messages, then a solution like this is a good option. Yes, your provider knows it's you, but unless they've invested serious effort backdooring your hardware, they can't tell what you send over SSL'd connections, they can't tell what people visiting that page are seeing, and by and large, most importantly, they don't care; they see no profit from that.


> Free means you are trading your privacy for the service.

Exactly. It should be well-known that "If you are not paying for the product, you are the product.".


That is true. Also, if you are paying for the product, you may still be the product.


I did not write "if and only if". :-)


No worries - I'm not arguing with you :)


I'm not sure I want to open this can of worms here, but this is one of the appealing properties of differential privacy: there is a quantitative bound you can invoke ("we will mine at most eps-DP from your emails, in exchange for this tasty cookie") and then you don't have to have a complicated discussion about which types of PII they ripped from your email, metadata vs. non-metadata, and other increasingly meaningless distinctions.

Differential privacy also has the nice property that it works like a fungible currency: it is additive, so even if you hand out x, y, and z units to different folks, they collectively can't spend more than x + y + z units, no matter how clever they are. In principle this could help folks determine a value for their sensitive data, and perhaps set up decent standards for people to value their own privacy (e.g. "selling 0.1 units of differential privacy from your email could shift your health insurance premiums by 10%").
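The budget idea above can be sketched in a few lines of Python. This is a minimal illustration, not production code: the function name, the count of 42, and the epsilon figures are made up, and it uses the classic Laplace mechanism (real deployments such as Google's RAPPOR use randomized-response variants of the same idea).

```python
import math
import random

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise (the classic Laplace mechanism)."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale); nudge u away from -0.5
    # so log() never sees zero.
    u = max(random.random() - 0.5, -0.5 + 1e-12)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Sequential composition: separate eps-DP releases of the same data
# together cost at most the sum of their epsilons -- the "fungible
# currency" property described above.
budgets = [0.1, 0.25, 0.05]
releases = [laplace_mechanism(42, eps) for eps in budgets]
total_spent = sum(budgets)  # at most 0.4 units of privacy handed out
```

Smaller epsilon means more noise per release, and the sum over all releases is the total amount of "privacy currency" you have spent, no matter how the recipients combine their answers.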


What state is the field of differential privacy in right now? Is anything useful already deployed or expected to be deployed soon, or does it require much more research?



Google uses it.[0] I think it mostly needs interested parties; folks who care enough to commit to doing something more interesting with it than just counting things. I've certainly built several tools while at MSFT, they just .. languished.

[0]: https://github.com/google/rappor


I agree that boiling the "wall of legalese" down to some quick bullet points might be a good approach. Within the existing regulatory framework, the requirement could be that the language be clear and that it accurately reflect the longer, less ambiguous legalese.

That said, how much of a culture shift does there need to be on the consumer side? That is, even if privacy policies etc. are boiled down to 3-5 bullet points, how many people will continue to wave them away, and how do we change that?


I'm thinking about the standardized nutrition information labels required on foods we buy. We can easily pick it up and see how many grams of protein, carbs, sugar it has, and read the ingredients. Something like that for privacy would be great.


That is an interesting comparison, especially given how long it took to make an average person even read those once they were on the box (I'm not even sure how many people do).


Yes, or maybe a couple of standard licenses from which a company may choose.



This isn't going to work; I have no idea what you even mean by "what data is shared with whom."

Let's try to imagine how this would work with the example in the article, unroll.me, and the information they sold to Uber. They couldn't have told people they were going to sell their data to Uber; that deal was certainly made after much of the data was gathered. Even if you restricted it to the future (so you have to know your partners ahead of time, which is practically impossible anyway, but let's go with it), how would you explain that in a non-legalese way that really captures what you are doing?


> it should be required to state what data is shared with whom in clear human language (and not in a 20 page wall of legalese).

I am not sure there is a human language equivalent of what the legalese says. The company won't even know in advance what ingenious uses of your data they'll be able to cook up.

> the legalese often boils down to 'you will sell your soul'.

Okay, maybe that works.


The idea that privacy can be traded away transactionally is a misrepresentation. Privacy is a choice; depending on who I am interacting with I will withhold certain information about my life.

If someone convinces a friend or family member of the lie that they can trade privacy for services, then my communications with them are compromised without my consent.

This is all about every being's right to choose to be private. The idea that it's okay to impinge on this right as long as someone thought of it as a transaction is morally bankrupt.


It's not always a choice, though. If I need to rent a car, I need a credit card. Now my credit card company knows where I went on vacation or traveled for work. I have to have an email account, but if I apply for a job, now my email provider knows quite a bit about me.

None of these things are necessarily the end of the world if there's some data protection guarantee. But to say it's simply a choice ignores how much data other people have on us, just by going about our daily lives.


What I'm saying is that it's immoral for your credit card company to spy on you, regardless of what language they snuck into your contract. Same for your email provider.

It's even less acceptable if a company uses its customers to spy on people who aren't customers.

To rationalize it by saying "well, you could have used a different bank" is self-indulgent nonsense.


Agreed; the article covers regulation meant to foster competition but ignores regulation needed to cover users' privacy "rights" over the data collected.

Generally speaking, this article seems to gloss over the fact that this data is generated by actual human beings; examine for instance this section on ownership:

"And it adds to the confusion about who owns data (in the case of an autonomous car, it could be the carmaker, the supplier of the sensors, the passenger and, in time, if self-driving cars become self-owning ones, the vehicle itself)."

So the person from whom the data was generated doesn't own the data, despite owning the vehicle on which it was generated? Isn't that kind of crazy? edit: Ah, I do see the passenger in there now. I'll leave this up because the point still stands that "the passenger" is a very passive way to describe the human being involved.


If you read the comments to the California DMV on self-driving regulations, you see this. There are parties [1] which want to track where the cars go, and object to DMV's privacy rules. But they also object to public disclosure of details of accidents and disconnects, and access to sensor data after a crash without manufacturer involvement.

[1] https://www.dmv.ca.gov/portal/wcm/connect/6558e3eb-8d38-427e...


This is a VERY important point.

I don't use FB, but a data-promiscuous person who knows a ton about me recently unfriended my closest contacts. Ignorance is bliss?


"Facebook revoked users' ability to remain unsearchable on the site; meanwhile, its chief executive, Mark Zuckerberg, was buying up four houses surrounding his Palo Alto home to preserve his own privacy. Sean Spicer, the White House press secretary, has defended President Trump's secretive meetings at his personal golf clubs, saying he is 'entitled to a bit of privacy.'"

That said, privacy is being commoditized for everyone as well with tools such as Snapchat, the Epic Privacy Browser and TOR.


> Snapchat, the Epic Privacy Browser and TOR.

One of these things is not like the others.


Hehe, well Snapchat is kind of private in theory...at least people use it because they think it's private!


It's not private in any way, and those who think it is are incredibly naive.


How can one trust someone with their email account? I believe that is just a weird thing to do, even if it was not such a threat before. Your email is something personal.


Because the world is hard. People don't understand the implementation details well enough to intuit where the boundaries of possibility lie. They don't understand how morally bankrupt the valley is, or know that "tech" doesn't have a single shred of the restraint that would be necessary to create ethical boundaries.


To be fair, people don't know because they are constantly told by slick products, pretty interfaces and "easy" stuff that tech isn't hard. It's all just magic. Magic stopped working? Call tech support.

I mean, seriously: if you own a car, there's a certain level of competence expected from the owner, even if it's just basic stuff like putting the gas pump in the right spot, getting the oil changed regularly, and keeping an eye on the various dials and gauges to make sure your engine isn't on fire. There are no such expectations for a smartphone; they just sell you the thing at a steep subsidy and send you on your merry way. I find this interesting because you can do far more damage to your life or someone else's with a smartphone than with a car.


> you can do far more damage to your life or someone else's with a smartphone than with a car.

Could you elaborate? I find it difficult to imagine how one kills somebody using a smartphone, while with a car it's pretty obvious.


I can't remember her name, but it wasn't long ago someone tweeted an incredibly stupid joke about AIDS in Africa, and by the time their plane landed they'd lost their job and had been publicly disgraced so thoroughly they were unemployed for something like 9 months.

I should've been more specific, obviously a car will do more physical damage but I'm talking less about actually hurting someone and more about ruining someone's life.


The case of Justine Sacco. A magical combination of a single bad joke and outrage culture.

https://www.nytimes.com/2015/02/15/magazine/how-one-stupid-t...

http://www.dailymail.co.uk/femail/article-2955322/Justine-Sa...


> outrage culture

I think that's a little simplistic. Social media like Twitter and Facebook collapse the context and expand the audience of a joke/meme [1] until it's almost sure to reach someone who will find it genuinely offensive.

In real life, a joke (good or bad) normally reaches a small number of people at a time. If it's good it may spread and if it's bad it will eventually meet with disapproval that will limit its spread. The online space inverts this dynamic completely by making the spread of someone else's mistake a performative act. Now a bad/offensive joke will spread like wildfire because those spreading it get to stick someone else's name on it and say: "look how bad of a joke this person made".

[1] http://hlwiki.slais.ubc.ca/index.php/Context_collapse_in_soc...


Right. Now, whether the reaction was too much or not enough is up to the reader, but what is clear is that simply sending a tweet like that can have ramifications for the literal remainder of your life. She's never going to be completely decoupled from telling that stupid joke as long as she lives, and probably for a decent time after that, too.

I find it disturbing that so many people seem to think Twitter and Facebook and the rest are just places to dump whatever thought runs through your brain, as though nobody but the people you want to see it will ever see it. But again, that's what things like Facebook advertise themselves as, and that's the type of content Twitter encourages.


Wow. I've made jokes 10x worse in meatspace. I'm thankful I grew up before the social media craze.


> but it wasn't long ago someone tweeted an incredibly stupid joke about AIDS in Africa, and by the time their plane landed they'd lost their job and had been publicly disgraced so thoroughly they were unemployed for something like 9 months.

The mistake, in my opinion, lies rather in reacting so strongly to a bad joke.


Well, yeah, and if you're T-boned by a drunk driver, the mistake is the other person's for driving drunk, not yours for going on a Taco Bell run. Unfortunately, your spine is still shattered regardless of who was actually making mistakes.


You're trusting any free provider of email with your data ... I think it's unrealistic to expect everyone to host their own email servers.


>I think it's unrealistic to expect everyone to host their own email servers.

It's surprisingly more difficult than it used to be. Trying to get your server reputation just right so that gmail won't flag you as spam is much tougher than it was in the past, for example.

Speaking of spam: there is so much of it that a service like Gmail filters out an enormous amount for you, which is hard to replicate on your own.

I still send from gmail, although I use a personally-hosted (and aggressively firewalled/ monitored) box for receiving mail for things like password resets.
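For context on why self-hosted deliverability is fiddly: big receivers like Gmail check a handful of DNS records before trusting your server. A hedged sketch of what those records might look like for a hypothetical example.com (the IP address, selector name, and key are all placeholders, not a working config):

```
; SPF: declare which hosts are allowed to send mail for example.com
example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 -all"

; DKIM: publish the public key for signatures made with a "mail" selector
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"

; DMARC: tell receivers how to handle failures and where to send reports
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

On top of this, the sending IP's reverse DNS (PTR record) should resolve back to the mail host's name, and even then a fresh IP with no sending history can be penalized until it builds a reputation.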


When setting up my own email server is as easy as downloading and launching Firefox then it might happen. As is, I don't think it's so simple.


Truthy. But that plan didn't even work out for Firefox.


What didn't work out for Firefox? Firefox is used by millions last I checked.


You should try cloudron.io


> How can one trust someone with their email account.

Wait until you hear about mint.com.... ;)


Anything in particular that they are doing with their user data? I am curious.


I don't know. What is their business model? Intuit paid $170M for them, so they're certainly angling to recover at least that much through some monetization strategy. I think it's fantasy to believe that they wouldn't sell your detailed transaction history. It looks like JPMorgan Chase has muscled their way to an exception on that [1], a strong indication that selling data is their MO.

[1] https://en.wikipedia.org/wiki/Mint.com#Controversial_practic...


They are in the most lucrative advertising business: finance. Every time my credit card fee pops up on my account, they show an ad for credit cards with no fee. Insurance, investing, loans, etc. All of the most expensive AdWords keywords on Google, and they have a treasure trove of data on it.

Since I strongly suspect that data's going to get sold by my bank, credit card company, insurance broker, loan provider and so on, the convenience of mint outweighs its disadvantages to me.


> But your email is something personal

Tell that to your email provider..


I do.

They don't advertise to me, and they let me encrypt my mail: incoming, at rest, and in transit between my mail client and their servers.


What I want to know is - How can I buy the information marketers have collected about me?


From what I've heard, it's usually not in the interest of companies like Google and Facebook to sell raw data. It's much better for them to keep control of the data and sell its use as a service.


Has anyone implemented something similar to Unroll.me, except not SaaS? Perhaps as a browser extension?


Fake title. Real title (and URL): How Privacy Became a Commodity for the Rich and Powerful


I actually disagree with the real title. People of all socioeconomic classes use these privacy-encroaching apps -- Facebook, Unroll.me, etc.


Last month, the true cost of Unroll.me was revealed: The service is owned by the market-research firm Slice Intelligence, and according to a report in The Times, while Unroll.me is cleaning up users’ inboxes, it’s also rifling through their trash. When Slice found digital ride receipts from Lyft in some users’ accounts, it sold the anonymized data off to Lyft’s ride-hailing rival, Uber.


This was a useful service for many tech people; some here are still discovering this issue that blew up two weeks ago.

Initial discussion of the "buried lede" in the NYT article Uber CEO Plays with Fire: https://news.ycombinator.com/item?id=14178397

Discussion of the CEO's non-apology: https://news.ycombinator.com/item?id=14181152


I can't help but wonder about the competence of "tech people" who are handing random services on the internet full access to their email accounts to save themselves minimal amounts of time.


Hey how else am I supposed to be 10x and have time to write this Rust CRUD app?



