America should borrow from Europe’s data-privacy law (economist.com)
281 points by kushti 10 months ago | 170 comments



Generally I agree, but I think the law should take a different slant. Rather than providing consumers recourse after their data has been collected, we need to provide individuals the right to control what data will be collected. All devices should have a label describing what kind of data collection they do. This label should be on the packaging in easy-to-understand human terms (not buried in the EULA). Additionally, devices would be classified into one of 4 different classes:

Class A - no data collection

Class B - anonymized statistics for diagnostics that cannot be used for marketing

Class C - anonymized usage statistics that can be used in aggregate

Class D - targeted collection that can be used for targeted marketing

Consumers should have the explicit right to opt out of any and all data collection without risk of impairing the primary function of a device. For example, there is no reason a TV should need to be anything beyond class A (maybe B). A smart speaker, on the other hand, needs to be a class B, maybe C. Nothing should need to be a class D.
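To make the proposal concrete, the label could be modeled as a simple data structure. This is purely an illustrative sketch; the class names and fields below are my own invention, not part of any actual standard:

```python
from dataclasses import dataclass
from enum import Enum


class CollectionClass(Enum):
    """The four proposed data-collection classes, from least to most invasive."""
    A = "no data collection"
    B = "anonymized diagnostics, no marketing use"
    C = "anonymized usage statistics, aggregate use only"
    D = "targeted collection, usable for targeted marketing"


@dataclass
class DeviceLabel:
    """A hypothetical packaging label: product name, class, and opt-out status."""
    product: str
    collection_class: CollectionClass
    opt_out_available: bool  # opting out must not impair the primary function

    def summary(self) -> str:
        return (f"{self.product}: Class {self.collection_class.name} "
                f"({self.collection_class.value})")


tv = DeviceLabel("Smart TV", CollectionClass.A, opt_out_available=True)
print(tv.summary())
```

The point of a fixed enumeration is that consumers compare one letter at the shelf, the same way they compare energy-efficiency ratings today.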


The GDPR is broader than you make it look here. It doesn't only have provisions for "providing consumers recourse after their data has been collected"; it also provides "individuals the right to control what data will be collected".

It even goes further than your request for an opt-out. All data collection must be opt-in, and opt-in requests must be written in such a way that your users can understand them.

Opt-out in combination with not "providing consumers recourse after their data has been collected" is especially dangerous. Extreme example: application X installs a keylogger and sends all your keystrokes to evilEmpire.com. Would you consider that to be OK, as long as there is an opt-out button? Would you be OK with evilEmpire.com keeping the keystrokes it collected before you opted out?


Would these classes apply specifically to data collected for purposes other than the primary function of the product? If such a regulation applied to all the data collected by a device or service, I'm not sure the classes you propose would be a good basis for a consumer to decide which products to use. Let's take a product that stores data on the company's servers as part of the core offering.

Under a GDPR-like system where the purposes of data collection were specified as only fulfilling that core offering, you'd get a pretty good guarantee that it wouldn't be used for personalised marketing etc. Yet on my reading of this system, it'd be a class D since this data could potentially be used for targeted marketing.


Admittedly this is the first time I've written this idea down and it needs a bit more work.

But I stand by the general sentiment of empowering users to decide and telling them in plain language the reasons why their data is being collected.


But I stand by the general sentiment of empowering users to decide and telling them in plain language the reasons why their data is being collected.

That's exactly what the GDPR does. You can use the data for other purposes (e.g. marketing) as long as you request for that specific permission in "clear and plain language".

Citing the text:

"The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used."


How the data is used is at least as important as what data is collected.

I'm fine with a smart speaker securely storing my voice data in the cloud to provide me with personalised speech recognition. I'm not OK with that data being sold to a fraudster who wants to phone my bank and impersonate me. If you classify based on data collection alone, then both these scenarios would result in a "class D" rating.


Yes, it would need to distinguish in each class between technical capacity and reserved legal rights, with stiff penalties for breach of either one. One might use your targeted voice data only to improve your service, but also reserve the right to aggregate it for marketing purposes.


I don't mean to say it's a bad idea by any means, just wanted to provide some constructive criticism. I'd say that GDPR's rules around providing purposes for data collection, and not bricking services unnecessarily for refusing such, are pretty good; and a labelling scheme similar to the one you've proposed, highlighting what is being collected outside the core requirements of the service, might complement it nicely.


I think an important point that the GDPR gets right is recognizing that you often need to give data to services in order for them to be useful, but you currently have almost no recourse if you stop using a service and don't want that data 'out there' any more. Even just the limited data that services can 'reasonably' need will still paint a very detailed picture of you in aggregate. Forcing companies to allow users to withdraw consent and actually get their data deleted is the only way to prevent that picture from growing more detailed over time.


Yes, this, and a second part: GDPR recognizes that data moves between services, and people should have rights about their data that can be enforced on recipients after data is transferred. Currently, agreeing to transfer is a ratchet, where the secondhand party has basically no responsibility to you.


Is there a way to flag yourself as a consumer who always requires personal data deleted when you cancel a service, or is it opt-in on a case by case basis only?


That's neither opt-in nor opt-out, but stronger. Companies may only keep data they need for doing their business. If you cancel a service, accounting and similar laws may dictate that a company has to keep your address and payment info for a couple of years, but that's about it. (This is actually covered by the implementation of the "data protection directive", the precursor to the GDPR, which has been in effect since at least 1-1-1999. See https://en.m.wikipedia.org/wiki/Data_Protection_Directive#Le...)

Student clubs (yes, the GDPR applies to them, too, even if they are 100% volunteer run) that want to keep address info of past members for communicating with alumni, for example, have to make that opt-in.

"Necessary for running the business" has some gray areas. For example, an athletics club will want to keep a list of club records, photos of past victories, etc. That's something that is allowed as "necessary for being a sports club". (I don't know whether there's any case history on this.)


Essentially, the GDPR does this, barring some explicit opt-in to the contrary or other laws that take precedence (such as book-keeping requirements).


This would likely have to be paired with a tiered pricing model for said good or service - i.e.

Class A = 100% of retail cost

Class B = 75% of retail cost

Class C = 50% of retail cost

Class D = free

I'm convinced that the vast majority of those outraged by today's data collection practices will conveniently direct their ire to some other cause when faced with the possibility of actually having to pay for the services they've gotten used to having for free.


Sadly this is illegal in the EU. You have to offer both a Free+ClassD and a Free+ClassA version, otherwise it's forced consent.

I have written more about this here https://news.ycombinator.com/item?id=16351892


Does this essentially force companies to charge for services in the EU then? The only reason FB, Google, etc even offer "free" services is because they make money on the backend utilizing the data they collect. If everyone can say "don't collect my data but also keep my service free", I'm not sure how a business can remain viable in the EU without charging some type of subscription or micro-transaction fee.

Maybe this is how the whole web-based fee structure should have been set up in the first place, but it's not the current deal we have in place. How difficult will it be to shift an entire industry (and consumer mindset) to a vastly different fee structure?


They can still show ads, and they can still be relevant to whatever page the user is currently viewing.

Since they do such a terrible job of it anyway I can't imagine it being less profitable.

And then again, maybe actually paying for something like facebook is the right way to go.


Yeah, that's a good point. If the whole advertising landscape forcibly moves away from direct targeting, presumably the same players retain the same position in the marketplace they have now (with the possible exception of ad dollars moving to other media if web is seen as less effective in general without direct targeting).


Why wouldn't advertising still work without pervasive tracking? If targeting ads based just on content works for TV, radio and print media, why wouldn't it work for online advertising?


Why "sadly"?


Well, I think that is the best part. There is no need nor usefulness of greedy parasitic companies.


Is such tiered pricing allowed under the GDPR?

Because that would provide an easy way out of the whole situation. Just make the price ridiculously high, and all users will choose the free, your-data-belongs-to-us version.


Would I as a business owner be allowed to kick people off my service for not allowing class D? Because in some cases it really is necessary and why should I host a customer who is only costing money?


Might I suggest an alternative? If the customer is not comfortable with Class D, don't kick them off, require a monthly subscription fee. If they don't like it they will kick themselves off.


The subscription fee should be equivalent to the loss to the business from no longer collecting the data on the user.

It could actually be an eye-opener for many people. Most middle-class tech people would probably happily pay $10+ a month for the service Facebook provides, but if it's something like $70 then Facebook is either scarily good at monetizing their information or unreasonably pricing the service.

On the flip side, you shouldn't be allowed to undercharge for the cost of a private membership. Transparency is good; let users be part of the free market for their data and give informed consent.


According to their filings, it's about $26 per user in the US and Canada per year.


I 100% agree with this approach. The problem with GDPR is the following:

- big corporations which do collect the data will lawyer up and keep collecting the data they do now

- small companies will do whatever they can and bet that enforcement is going to be sparse

- the enforcement will be non-existent

So nothing will change. The GDPR will not improve privacy, will not improve users' knowledge of who is collecting what, will give some users a false sense of privacy, etc.


Lawyer up when the risk is 4% of global revenue, and the opposing party has the means to lawyer up, too? (The opposing party would not be some John Doe, but a national supervisory authority that decided to bring a case.)

And "enforcement will be nonexistent", from an EU that has a history of taking on big companies in, say, cartel cases (http://ec.europa.eu/competition/cartels/cases/cases.html) or antitrust cases (http://ec.europa.eu/competition/antitrust/cases/)? Would you take that bet, again at percents of your global revenue?


Doesn't your argument apply basically to any law?

The problem with tax law is the following:

- big corporations will lawyer up and still dodge taxes

- small companies will do whatever they can do and they will bet that enforcement is going to be sparse

- the enforcement will be non-existent

I don't know if you meant to do so, but you seem to be arguing governments are incapable of enforcing law against companies.


They're pretty incapable against large multinationals. The sort that hoard profits offshore for years and years, waiting for the tax holiday. CEOs can outlast multiple election cycles to get what they want.


EU fines Google record $2.7 billion in first antitrust case

https://www.reuters.com/article/us-eu-google-antitrust/eu-fi...

Microsoft fined by European Commission over web browser

http://www.bbc.co.uk/news/technology-21684329

Apple ordered to pay €13bn after EU rules Ireland broke state aid laws

https://www.theguardian.com/business/2016/aug/30/apple-pay-b...

EU fines Facebook 110 million euros over WhatsApp deal

https://www.reuters.com/article/us-eu-facebook-antitrust-idU...


$2.7 billion is just the cost of doing business to Google. They had $32 billion in revenue last quarter.


$2.7 billion is not simply the cost of doing business. $32 billion in revenue doesn't mean $32 billion in profit; once you deduct costs you are left with a lot less. Under the GDPR the maximum fine grows to over $4 billion, and that will hurt even more.


Sure, but it ate into the margin, and the share price dropped significantly when the news got out.

Europe probably has a more functional government than the US.


I know that in the UK at least, big firms are taking this very seriously. The potential fines are huge. But it's true that enforcement will be spotty, as the ICO and its equivalents are under-resourced. You can bet, though, that they will be looking for some big targets pour encourager les autres.


I do think a few "heads on pikes" in the media will make enforcement a lot easier afterwards.


Like the Volkswagen diesel emission scandal? Where is the enforcement after that? Car companies can still shop around the EU and get their emission standards certified in Malta or some other less fortunate EU country.


Your proposed classes would lead to everyone just collecting class D, because it's the only one that allows for targeted collection, which is needed even for diagnostics and debugging. In order to improve algorithms, you need to be able to figure out what the user wanted, which means being able to view their session specifically, not in aggregate.


You don't need to view my session to improve my toaster. I currently have a class A television and would love to be able to buy another one in the future. I'm not sure that will be possible, but maybe having a standard label would help.


This, exactly. People never believe me when I explain that smart refrigerators phone home about use and potentially contents. Then I show them the terms and EULA, and almost universally they think it should be illegal.


So make it illegal.


Sure! Where are the politicians who have “enact a GDPR-like law in the USA” on their platform? Hell, where are the politicians who even have multiple words to say about privacy?


Not sure if you're being sarcastic, but the context is bognation's proposal for data collection tiering. Instead of tiering, just adopt a GDPR-like law.

> Hell, where are the politicians who even have multiple words to say about privacy?

In the vein of Manufacturing Consent - the elites in the US are not split on the topic so there is no discussion. Larger companies, especially in SV, run on private data.


As long as users have the right to opt into collection under class D, it would be fine. Keep in mind that products were successfully developed for thousands of years without invasive data collection.


To play devil's advocate, though: I think product development accelerated greatly thanks to analytics and usage tracking, and lots of mistakes were avoided. This in turn can reduce the price of those products or features for consumers.

Of course, at the expense of privacy, but I'm still wearing my devil's hat here for a moment. And I would go as far as saying people by and large don't care as much about privacy as they do about cost, or getting free stuff (e.g. Facebook).


And a lot of those innovations were just to keep users more immersed while showing them ads. There was little innovation in the past decade that actually helped the end user.


It certainly has been abused. But I think it's throwing the baby out with the bathwater to some extent.

People won't be immersed in something that doesn't bring them value. Some products would make money from users by showing them ads; others would ask those users to pay. Both would want to create an immersive experience, in my opinion. I would even argue that a less immersive experience (getting bored or distracted) leads to ad clicks, whereas a service that charges wants to keep users truly engaged.


Is “being immersed in the product” a real user need? That sounds like a company goal to me, not something I would want as a user.


When you watch a film, do you want to feel immersed in it? When you play your favorite sports game? I don’t think using a product is necessarily different.

But let's replace "immersed" with "enjoy using", or with being clear and intuitive. Some product designers and UI experts don't need usage data, but I think a lot benefit from it to create a better, more enjoyable, and clearer user experience.


Your point is correct for entertainment apps like video games. The last decade doesn't look like video games' golden age, though :(

For the rest of the apps, the less screen time they get, the better they are. I don't want to be "immersed" in Spotify. I want to find the music I want and be done with it.

There's little in the data to help UI design. The best way is watching your users (or at least using an eye-tracker) and then talking to them. Looking at cold data, you never know why somebody took that long at a certain step or kept clicking around. Why did they click the wrong button - was it the wrong color or positioning, an unclear label, or did they just change their mind? It's also hard to know when users are annoyed or stressed.

Meanwhile, data is great if you want to do shoot-in-the-dark A/B testing, which is good for optimizing "more links clicked", but not so much for a relaxed user happily doing their task in the quickest way possible.


In most cases a product should help users do what they want quickly and let them go. E.g. if I come to Facebook, I want to get my cat-picture fix and GTFO. Yet it tries to lure me into spending as much time as possible :/


No - I would pay for a TV which is class A. Sure, the TV manufacturer will need to buy data from a class D company - but they already do that.


In California, buildings must display a sign if they contain dangerous chemicals...

All buildings display said warning, because that way they have zero liability.


I know the EFF opposes the right to be forgotten. I'm curious if there are any similar concerns with the GDPR.

The trouble with the right to be forgotten is censorship. The concept is nice, but in the end, the right to be forgotten can mean corrupt, powerful people can censor their misdeeds.

I think something similar in the US would be problematic given our freedom of speech. Even if you get a criminal record expunged, anyone who scooped up that data, which was once public, has the right to hold onto it and sell it.

Not to say that's a good thing. It encourages labeling theory, preventing people with criminal records from being able to find legit work. (As a counterexample, the sex offenders registry in Australia is confidential. It can only be accessed for very specific purposes, like employment at a school.)


> The trouble with the right to be forgotten is censorship. The concept is nice, but in the end, the right to be forgotten can mean corrupt, powerful people can censor their misdeeds.

That would actually most likely not go through. As a powerful person, you're likely considered a person of public interest, at which point your right to be forgotten is forfeited, because it conflicts with free speech. (A judge will decide whether that's actually the case or not.)


I hope we can differentiate "I want the New York Times to remove an article about me" from "I want Equifax to remove its business records about me because I don't consent to them collecting my data for commercial purposes."

The former would clearly run into 1st Amendment concerns, but I'm hopeful the latter can be allowed without the same concerns. Does the EFF oppose the latter type?


The foundation of the American economy is the fact that the financial industry, and not you, owns the records about your credit history. Disempowering data subjects is essential to making credit history a useful signal about risk. Without that signal, lending would disappear overnight. This would crash home prices and wipe out almost all middle-class wealth. It would also probably eliminate the auto industry and severely curtail retail as consumer credit disappears. A less indebted society might be good in the long term, but that’s one hell of a shock you’re proposing.


You're right about the way these things currently work. But you'll still be able to volunteer information about yourself in pursuit of a loan. Exactly who will validate that information is another question. (Dare I suggest a distributed credit ledger? :) )

There are quite a few startups in the lending space. But most I've encountered rely pretty heavily on existing, underlying infrastructure, i.e. legacy finance.


A Blockchain is much worse than a credit reporting agency. Instead of one entity having your records, everyone does!


Yes. Storing financial records themselves on the blockchain would be pure insanity.

But I’m imagining a distributed reputation rating scheme. There’d have to be some PageRank-analogous feature (I think that’s how PageRank works), so that a high rating from an entity with a high rating is worth more.

Still plenty of issues to remedy... spam, sock puppets, etc. But I’d bet that a distributed credit rating system could be built.
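The PageRank-analogous idea could be sketched as follows. This is a toy of my own construction, not an existing system: each entity's reputation flows to the entities it rates, so a high rating from a well-rated entity counts for more. It deliberately ignores the spam and sock-puppet problems mentioned above:

```python
def reputation(ratings, damping=0.85, iters=50):
    """Toy PageRank-style reputation scores.

    ratings: {rater: {ratee: weight}} - each rater splits its own
    reputation among the entities it rates, proportionally to weight.
    Returns a dict of scores; higher means better-rated by well-rated raters.
    """
    entities = set(ratings) | {e for rs in ratings.values() for e in rs}
    n = len(entities)
    score = {e: 1.0 / n for e in entities}  # uniform starting reputation
    for _ in range(iters):
        # baseline mass, as in PageRank's teleportation term
        new = {e: (1 - damping) / n for e in entities}
        for rater, rs in ratings.items():
            total = sum(rs.values())
            if total == 0:
                continue
            for ratee, w in rs.items():
                # the rater passes on its own score, split by rating weight
                new[ratee] += damping * score[rater] * (w / total)
        score = new
    return score


scores = reputation({"bank": {"alice": 1.0},
                     "alice": {"bob": 0.5, "mallory": 0.1}})
```

In this example "bob" ends up ahead of "mallory" because "alice" rates him more highly, and "alice" herself is backed by "bank"; the iteration is what makes endorsements from well-rated entities worth more.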


Just eliminate the various liability shields that have been enacted for third-party content. In certain areas (e.g. the personals website crackdown) we are moving in this direction already.

If you make a search engine vulnerable to a libel lawsuit because, for example, their search results make it look like Joe So-and-So was arrested for DUI (when it was actually Joe So-and-Sew or some such thing), they'll just stop indexing that stuff entirely.

Best to avoid creating new laws where old legal concepts will work fine.


Is that really the world we want, though? If Google is liable for third-party defamation, aren't they likely to deindex any negative news about anyone who threatens to sue them?


If it's about a public figure (even a limited-purpose public figure), there's no problem because that speech is already highly protected against libel claims.

If it's basically just non-newsworthy information about private citizens, then I do not think they should be shielded from liability.


GDPR focuses too much on "the right to be forgotten" but is a threat to something more important: "the right to not be deleted".


Can you clarify what you mean?


Given the expense and difficulties of complying with these rules and enforcing them, we should seriously consider the opposite approach of radical transparency.

As the ability to collect and process data becomes cheaper and easier to deploy, it seems to me that trying to preserve an assumption of universal privacy and anonymity is like trying to swim up a waterfall. Cameras are becoming so cheap they're practically disposable. Facial-recognition software and the big-data tools to manage all this data are also becoming more widely available. Are we going to legislate against all that? It's one thing to monitor high-profile corporations like Google and Facebook, but if surveillance is cheap enough, how do you make sure that no one is amassing reams of private information?

The worst-case scenario is that while corporations and criminal organizations continue to discreetly gather private data, the rich and powerful will be able to afford the cost of privacy, but the rest of us won't have a grasp on who knows what about us.

The alternative to working against the tools that technology affords us is to work with them. In some cases this means embracing radical transparency. We define a narrow range of places that really are private, and assume that anything that happens outside of those spaces is public. For example, what happens inside of one's bedroom is private, but what happens outside of one's front door is public. This information wouldn't be available only to the powerful or well-connected, it should be available to everyone. In particular, society should keep a close eye on the richest and most powerful people. Not necessarily on their private lives, but certainly on their finances.

I'm not arguing that we should give up all privacy. Encryption works and is difficult to defeat, so we should default to encrypting all interpersonal communication. We don't need to give up privacy, but we do need to prioritize what aspects of our lives should remain the most private. I do think that if that we're going to expect twentieth century notions of privacy and anonymity with twenty-first century technology, we're going to have a very hard time of it.


I think part of what I adore most about GDPR is how it gives me, as a user, control over how corporations are storing my data in their database.

I don't necessarily have any expectation of privacy. A lot of it is public data (my email for example). But maybe I just don't want my email to be in a particular database and be used every which way, because I never consented to that particular corporation using my personal data as part of their business strategy.

I'm not just talking about "I signed up to Facebook". I'm talking about "I explicitly didn't sign up to facebook, and yet their Like button is tracking me across the web".

It's also things like the right to rectify that I appreciate a lot. Less so on startups, who nowadays have a clue how to do simple user profile forms, and more so for, once again, big corporations who have decided that real names never change, even if the CSR who signed you up over the phone completely misspelled it.

I don't think GDPR is as much about privacy as it is about control.


> Given the expense and difficulties of complying with these rules and enforcing them

Knowing what data you have and being able to manage it is neither difficult nor expensive. It is a short-term cost for long-term benefits. The fact that most companies don't have this in place shows how bad data architecture and governance are across the industry.

> but if surveillance is cheap enough, how do you make sure that no one is amassing reams of private information?

Laws don't suffice on their own, but there are mechanisms to enforce them: whistleblower protection, on-premise inspections, public claims, etc.

That it is hard to assure food safety doesn't mean that you give up. From time to time you find really bad transgressions, but it was worse before these laws.

> This information wouldn't be available only to the powerful or well-connected, it should be available to everyone.

The problem is the asymmetry of usefulness. For me, knowing that you went to the supermarket two days ago and that you have a post about privacy is useless. For a marketing company it may provide high value.

There is the risk that the powerful will become more powerful, having data that they can use to increase their riches, while the poor will stay poor, as they can't use the data for anything useful.

> but we do need to prioritize what aspects of our lives should remain the most private.

This is true, though. Having public salaries, for example, may help employees have as much information as companies already have. Giving everyone access to some data can help improve society. The difference is that I would prefer it the other way around: we protect our data and decide what to show, instead of making everything public and fixing what breaks. So I'm not so far from your view, but I see it from another perspective.


I see the point, but government and the legal framework are exactly what could help with swimming against the waterfall. It's hard to say what will happen and how easy it will be to loophole this; it might even have only negative results in the end, such that large companies find ways to bypass it while smaller ones are hit disproportionately, but I think it is better than not trying at all.

> We define a narrow range of places that really are private, and assume that anything that happens outside of those spaces is public.

I think it is hard for the average person to even understand what public and private mean in various contexts. If the government can see everything, is it private (even if it is called a "private" chat)? If the company providing the channel can see the conversation, is the conversation really private?

It is a bit like the FDA and food safety. It would be nice if we didn't need the FDA and everyone could carry chemical and biological assay kits with them, inspecting and testing the products and drugs they consume. But most people don't know how, or can't afford to do it. So, however imperfect, it is nice to have at least some entity, even if it is corrupt and has a revolving door with the industries it is trying to regulate, to set some standards. The GDPR is a bit the same: it is not perfect, but I still see it as a positive step forward.


"Radical transparency" deployed in places which have not yet been fully liberalised will get people killed. Even in Western countries there are plenty of people who are not "out" because it will wreck their family life.


Radical transparency is far worse than the current situation.

You don't have an exa-scale storage array, and Google does. Which one of you is at an advantage if everyone has to share data with everyone else?

Radical transparency is nothing short of digital feudalism, it puts all power in the hands of those that own the storage and processing. Let me now address your needlessly dystopian post one point at a time:

1. how do you make sure that no one is amassing reams of private information?

You license and audit large storage arrays. Peta-scale and above will do as a start. You can detect those remotely from their power draw alone, so they shouldn't be hard to find if you're not phoning in the job. They'll show up in power-grid stats more or less the same way large weed grow-ops do, and we already hunt those down in most Western countries.

2. The worst case scenario is that while corporations and criminal organizations continue to discretely gather private data, the rich and powerful will be able to afford the cost of privacy but the rest of us won't have a grasp on who knows what about us.

Indeed, so why would we deliberately make that a reality? Taking action on data requires storage and processing capacity sufficient to process that data, which no one other than the rich and powerful has. Additionally, transparency laws are only going to reach the edge of your borders, so anything confidential that can be offshored will be offshored to bypass your laws, but only by those that can afford it.

3. For example, what happens inside of one's bedroom is private, but what happens outside of one's front door is public. This information wouldn't be available only to the powerful or well-connected, it should be available to everyone. In particular, society should keep a close eye on the richest and most powerful people.

But in reality, no one except the rich and powerful has space to store footage of everyone's front doors, so boots-on-the-ground journalism against the richest and most powerful people will remain exactly what it is: detect/predict first, then selectively record. Meanwhile, you've just created a law that allows Facebook drones to prowl our neighbourhoods, recording as they see fit. Are you even on our team?

4. Not necessarily on their private lives, but certainly on their finances.

That's not going to work any better than it does today. Companies and individuals alike already funnel their wealth through shell companies in tax havens around the world to hide their activity. Those tax havens will not adopt your "transparency for everyone" laws, because their national income is based on hiding people's financial activity, and your laws have just made that service even more valuable. They also won't sell you the privacy protections they're selling to the elite, because you're probably not rich enough. So all you've done is ensure that ordinary citizens can never access the financial privacy that the rich can buy off the shelf.

5. We don't need to give up privacy, but we do need to prioritize what aspects of our lives should remain the most private.

Sure, but we should prioritize it with a plan of eventually restoring privacy for all aspects of our lives, not with a plan of doing a shit half-job and then going for an eternal smoko.


Google would have to provide it to you.

The way to achieve radical transparency is simply a law that says that if you hold (some kinds of) personal data, you must make it publicly available for free.

There's of course the issue that some things must be kept private (e.g. authentication data, but maybe also things like web searches that are personal but essential to use a service) and drawing the line can be hard.

The issue that this tries to solve is not really "privacy" per se, but rather the existence of entities monopolizing data.


>The way to achieve radical transparency is simply a law that says that if you hold (some kinds of) personal data, you must make it publicly available for free.

That'll work, but the theorem behind it is nonsense. We're trying to prevent people from building up secret stores of data on other people, but where I would force anyone caught doing so to delete that data, you would force them to share it.

The enforcement cost is the same either way because it's mostly in the discovery and the prosecution, so where's the savings in sacrificing all privacy in the process? There is none, so we might as well keep privacy. What a silly proposal.


The idea is that users will no longer give services so much data if it's guaranteed to become public (and thus services will no longer require or even ask for it), so the rule will tend to enforce itself.

And if they do, then the data being public prevents those services from gaining a competitive advantage from the data, thus making it easier to compete with them, and resulting in a more competitive market and thus better services at lower cost for users.


For those interested in considering alternatives, I recommend giving the sci-fi book "Queen of Angels" by Greg Bear a read. [1]

That novel follows a police detective trying to solve a crime. A major source of tension is that all of the quasi-public data (public cameras, citizen movements, credit card use) is in the hands of a separate institution called Citizen Oversight. If I remember rightly, it was a separate, quasi-governmental (or non-governmental) body, broken down by region and with separately elected commissioners.

In the novel, the main focus is the relationship with the police, which was very tense; Citizen Oversight was very stingy with data. But you could easily imagine it having jurisdiction over corporate behavior around individual data. And having an active regulator whose job it is to enforce broad principles would have advantages over detailed rule-making fixed in laws. Especially so if they were part of a legally independent body.

It was definitely interesting to think about. And given that it came out in 1990, surprisingly prescient on the topic of data and privacy.

[1] https://en.wikipedia.org/wiki/Queen_of_Angels_(novel)


It's not the law that's the difference here. The clue is under the headline:

> The GDPR’s premise, that consumers should be in charge of their own personal data, is the right one

That's not just the GDPR's premise, that's the very foundation of privacy as a civil right in Europe, and has been for a very long time.

The GDPR is just yet another attempt to force companies who have wilfully ignored the rights of millions of Europeans to start complying with laws we already had in place. It's not something new, just an iteration in enforcement.

America should make laws that suit America's values and principles, but as it stands, America has no deep concept of privacy. The GDPR is alien to American values.

(BTW, that quote is subtly wrong but illustrates the huge gap in perception: it should be "citizens", not "consumers"...)


To push back on the premise a little:

The intention behind the GDPR is good, but it still hasn't gone into effect yet, and it remains to be seen what the long-term effects of it are. It's really premature to draw any conclusions about its effectiveness, and history provides us with countless examples of far-reaching regulation that either failed to have the desired outcome, or in fact ended up exacerbating the very problems that it aimed to solve.

With a law as massive as the GDPR, it's going to take several years to really get a sense of what steady state will look like, and there are all kinds of ways it can backfire. I hope it won't, but there definitely is a strong, unfounded bias in discourse towards assuming that the GDPR will succeed in the goals that have been projected onto it.


I'm not a lawyer, but it is my impression that the main thing that is different with the GDPR is the threat that it will actually get enforced.

In discussions about the GDPR I see things that have been part of Dutch law for years, in some cases dating back to the 1970s.

In practice nobody cared. In extreme cases the data protection authority would say something. But they were mostly understaffed.


> I'm not a lawyer, but it is my impression that the main thing that is different with the GDPR is the threat that it will actually get enforced

I think you are dead right. The GDPR is an incremental modernisation of the 1995 EU Data Protection Directive. There have been a number of cases recently showing that Facebook, for instance, has been breaking current EU law, but national governments (Germany and Belgium recently) have had a hard time enforcing it in any meaningful way. The GDPR will allow national governments to enforce their existing laws. If you are a US company that was breaking, for instance, the UK's Data Protection Act 1998, then I have very little sympathy if the GDPR now breaks your business model. Breaking the law while exploiting jurisdiction is not the kind of competitive advantage I will stand up for.

BTW you can't opt out of the law in a EULA.


How will the GDPR allow EU member states to enforce pre-GDPR law? How was it simultaneously law and unenforceable before?


As I understand it, the existing Directive has to be implemented by member states in domestic law. This makes it difficult for one member state to enforce action against a company incorporated in another. As a Regulation, the GDPR is directly binding and can be enforced at the EU level, rather than just at the national level.

In some ways, it makes it easier to comply, because you just have one set of rules rather than multiple national implementations of the Directive.


I don't know about the Netherlands, but here in Portugal, they're pretty responsive. After someone complained, one of my neighbors got fined for posting PII in the building's lobby.


While I don’t disagree, I wonder how much harm we should allow our own citizens to endure in terms of the abuse of their data while we wait for someone else’s experiment to conclude.


Could you point me to some examples of actual harm that people have endured for abuses of their data? Preferably not just single-instance anecdotes, but actual data on the harm that is occurring?

Theoretically "protecting" people is good, but protecting them from what specifically? Health records are already covered by HIPAA, so other than health, what needs more protection? For example, collecting MixPanel or Google Analytics data from a blog: what's the actual risk of that data? Very interested in real examples and not just hypothetical fears.

What problem is the EU law solving? Have people in Europe been suffering harms until now?


> Google Analytics data from a blog

This is not covered by the GDPR; it is fair use and anonymous.

HIPAA is not a widespread standard outside of the US

People have lost out due to credit card details being stolen. PCI compliance is a contract between merchant and bank, not statute law, and therefore we have seen colossal breaches (like the TalkTalk ISP) that are hard to punish. The cost of these breaches currently falls on other merchants, who lose out to fraudulent card use.

The next big co that loses thousands of cards, I really hope they get the top fines, as it is other companies that have to pick up the bill for their actions.


>I wonder how much harm we should allow our own citizens to endure in terms of the abuse

Those same citizens that voluntarily agreed to the EULA? Do you also support the 'War On Drugs' on the same premise?


You can't opt out of the law in a EULA (under UK and AFAIK European law)

Much of what is in the GDPR was already illegal under the 1995 Data Protection Directive, just hard to enforce on US companies.


Seems like somewhat of a false equivalence to me. People (largely) know the risks of drugs, and are knowingly taking the drugs. Data is often gathered unknowingly, or used for unknown purposes, and people don't know the risks/potential uses of that data. It also doesn't remove the ability to agree to the use of data (it just must be actual consent to the purposes of use), so it's more akin to a War on Drugs that targets the supply of impure substances, but allows the supply where the user knows exactly what they are getting.


If the GDPR was a War on Drugs, it would be one in which neither the users or the small time dealers/employees have anything to fear, only the gang leaders/shareholders.

I'd support that War on Drugs.


> but it still hasn't gone into effect yet

Actually, it was adopted in April 2016 and has been in force since May 2016; the information commissioners across the EU will be enforcing the regulations from 25 May of this year.


All of the discussion I've seen has been around the right to erasure. The disclosure provisions could have a large effect on employer-employee (or potential employee) relations, or none.

I'm not sure which, if any, of interview notes, performance reviews, or discussions about who to let go in a redundancy count as "personal data" of the employee. I'm also not convinced anyone else knows either.


I think it's pretty clear that they are the personal data of employees. What is less clear is who has a legitimate need to have that data and for how long (and hence who can keep that data without consent).


If they're all the employee's personal data, then the employee has a right to a copy. So companies can't legally keep this kind of thing secret anymore -- you're entitled to know what interviewers said about you to management before you weren't hired for a job, what your coworkers said about you that factored into your performance review, email threads about their side of your salary negotiation, etc?
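Mechanically, honouring such a request needn't be complicated. A minimal sketch in Python, with an invented record shape (this is an illustration, not any real HR system):

```python
# Hypothetical sketch only: the record shape and categories are invented to
# illustrate the idea of a subject-access export.
import json

records = [
    {"subject": "alice", "category": "interview_notes",    "text": "Strong on systems design."},
    {"subject": "bob",   "category": "performance_review", "text": "Exceeds expectations."},
    {"subject": "alice", "category": "performance_review", "text": "Meets expectations."},
]

def subject_access_export(records, subject):
    """Return a JSON document of every record held about one person."""
    return json.dumps([r for r in records if r["subject"] == subject], indent=2)

print(subject_access_export(records, "alice"))
```

The hard part isn't the filter, it's knowing which of your systems hold records tagged to a person in the first place.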


That is already the law, under the 1995 Data Protection Directive.


I think the US implementing something similar is inevitable. If not by the government, then by a privacy company (like PCI is for the card industry).

Already I've started to see contracts with credit card gateways include PrivacyShield clauses.

Personally, all products I build going forward will be GDPR and Privacy Shield compliant even though I am in the US. I recommend other entrepreneurs do the same, because it is probably easier to consider it now than to retrofit it later.

For example (to give context, we have PCI requirements too), when someone makes a change to the code, we have an impact assessment that needs to be filled out. Among the questions:

1. How will this change impact security?

2. How will this change impact customer privacy?

We fill it out for every single change request (even if the answer to both is "It doesn't") just to document that we are thinking about it and to ingrain thinking about it into the company culture.


What's to prevent the ticket creators / assignees from simply saying "no impact" by habit?

The danger with these kinds of controls is that you're trained to say "no impact" many times (because there is none most of the time).


This is something filled out by the security and devops team not by the ticket creator.

Also, best practice would be to have "No impact" require an explanation, not just a two-word brushoff.

Edit: Also at some point you have to trust your team, hire the right type of people, and embed it in the company culture that the analysis is something to be taken seriously. If leadership takes it seriously the people filling out the forms aren't going to brush it off.
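If you wanted to enforce that mechanically, a minimal sketch might look like this (field names and the rejection rule are invented for illustration, not a description of anyone's actual tooling):

```python
# Hypothetical sketch only: reject change requests that leave the impact
# questions blank or answer them with a bare brushoff.

REQUIRED_FIELDS = ("security_impact", "privacy_impact")
BRUSHOFFS = {"none", "no impact", "it doesn't"}

def validate_change_request(request: dict) -> list:
    """Return a list of problems; an empty list means the request passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        answer = request.get(field, "").strip()
        if not answer:
            problems.append(f"{field}: missing")
        elif answer.lower() in BRUSHOFFS:
            # A bare "no impact" needs a justification, not a two-word brushoff.
            problems.append(f"{field}: 'no impact' requires a justification")
    return problems

# A request that brushes off the privacy question gets flagged:
print(validate_change_request({
    "security_impact": "Adds rate limiting to the login endpoint.",
    "privacy_impact": "No impact",
}))
```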


> The legislation is far from perfect. At nearly 100 articles long, it is too complex and tries to achieve too many things. The compliance costs for smaller firms, in particular, look burdensome.

Not here, but I have seen many comments on other sites implying this will be a burden on small companies, which have to implement it while worrying about whether they are compliant with rules that can be interpreted in different ways. There is also the burden of answering requests for information, which range from the benign (and automatable) to letters like the one that caused a stir on LinkedIn [1], which can be complex and costly for a small business to answer.

The reason I talk about small companies is that, in a lot of cases, another already-overworked person will need to wear another hat and may or may not do a good enough job. The larger ones, by contrast, can put together a small task force and get this out of the way.

I know some commenters on HN would disagree and say that smaller businesses that don't adopt the GDPR should go out of business. But I largely disagree. When businesses close due to regulation, the survivors gain market share, meaning competition, which largely benefits the consumer, dwindles. A knock-on effect is that prices go up because of those same regulations.

However, one thing I haven't seen talked about, which I wonder might make the GDPR moot: Trump is currently engaged in a trade war, and I wonder whether any lobbying attempts are being made for him to exempt US companies from it [2].

[1]: https://www.linkedin.com/pulse/nightmare-letter-subject-acce...

[2]: https://martechtoday.com/president-trump-save-us-from-the-gd...


Just one data point, from me as a DPO-equivalent:

My company is squarely in the SMB camp at 21 employees and single-digit millions in revenue across three business lines.

GDPR compliance has already cost us hundreds of thousands of dollars and will cost us more as we go on. There will be some very minor benefits to our customers, perhaps, but for the same amount of money we could 100%-definitely-for-sure-absolutely give all of our customers things they would, if given the choice, trade those benefits for.

That's the thing; it's like when you buy an appliance. You can buy the thing that meets the needs for $X, or you can buy something that's better for $2X. Of course, the better appliance would be better. Should we make a law that requires companies to only make the better one? That law would provide a benefit, because consumers would get the better appliance, right? Okay, sure -- but at what cost? And what value-producing companies (because companies do provide value for customers!) are going to be marginally less efficient and therefore marginally less effective and therefore, on the margin, go out of business because of it?

There are valuable ideas in the GDPR. The execution is pretty crappy, and in the end I think it likely reduces net consumer autonomy because it gives them less choice in how they relate to companies.


> GDPR compliance has already cost us hundreds of thousands of dollars and will cost us more as we go on

Wow, that's incredible! Can I ask where you are based and roughly how the cost is broken down?

I just can't grok how an SMB would need to spend so much on something that seemed relatively straightforward for my own business.


It seems incredibly excessive to me unless your business model is harvesting personal data. I work at a rather large SME (a multi-channel retailer/wholesaler) and we are spending almost nothing. But then we were not doing anything creepy with our customer data before. We are updating some documentation, will ditch some old data we don't need any more, reword the privacy policy on our website, etc. I am not apportioning any direct cost to the GDPR, as these are all things that need attention periodically anyway.


We're a US-based remote company with employees in four countries and contractors in a few others. I have a sense that some of our unique needs make things more complicated for us than for many others, but my other SMB friends seem to be facing a lot of the same struggles.

I go into some more specific categories in another subthread in this article.


> You can buy the thing that meets the needs for $X, or you can buy something that's better for $2X. Of course, the better appliance would be better. Should we make a law that requires companies to only make the better one?

Well, it depends. If by "better" we mean it has an extra secondary feature, probably not. But if by "better" we mean it doesn't catch fire during use, then probably yes.

> The execution is pretty crappy, and in the end I think it likely reduces net consumer autonomy because it gives them less choice in how they relate to companies.

Can you expand on this? As far as I know, consumers can still give you express consent to use the data in other ways.


One example would be that consumers are literally no longer allowed to give their data in exchange for a service, download, etc. Companies can still offer these things and ask for the data, but the GDPR specifically disallows companies from making that thing contingent on the customer giving them the data.

In my role as a consumer, I have several services I use and many companies I've given my contact info to in order to get something from them. I love being able to use my information as currency. The outcome of the GDPR's treatment will not be that these companies still create the same whitepapers, services, etc. and just give them away for free. It will be that they move their energy to something else, because the whole point in creating those resources was to get the information, and this will probably cut that ROI for that in half or more. In my business roles, I had active projects to create valuable stuff for clients around that type of thing, and they now don't make sense.


> it's like when you buy an appliance. You can buy the thing that meets the needs for $X, or you can buy something that's better for $2X. Of course, the better appliance would be better. Should we make a law that requires companies to only make the better one?

Maybe? Many such laws already exist: regulations on fire safety of buildings, fire safety of pillows and furniture, energy efficiency and safety of cars and home appliances, anything related to food processing, and so on.

I don't think there are any consumer products at all where governments haven't made laws about only selling "the better one".


I can't really see how it's so burdensome for small firms. It's touted as a truth every time the GDPR is mentioned, but why?

If you ask me, being ready for GDPR is much harder for big companies with legacy systems, bureaucracy etc.

And I find that letter easy to answer. If companies don't, that just makes GDPR even more necessary.


Reasons it has cost my small company hundreds of thousands of dollars:

Hundreds/thousands of hours (over a thousand for sure by May 25th, and it doesn't stop there) of very expensive employees' time to understand what compliance means and to work toward achieving it (law/privacy professional, developers, training time for all employees)

New software tools for specific compliance requirements (documentation, etc.)

Consulting and new services needed (EU representative, etc.)

Cost of EU servers (this is NOT required for compliance, but so many of our customers have a bad understanding of what the GDPR requires of them that we found we would lose tens of thousands in ARR if we didn't bring up an EU server stack).

I've spoken with friends at big megacorps, and the challenges are massively different. At our company, we need almost everyone to understand quite a bit of the GDPR. We also can't build giant custom solutions -- like many SMBs, we are essentially a framework built around a chain of dozens of third party services that all have their own GDPR needs, from G Suite to Mailchimp to github to our bookkeeping and accounting contractors. The job of GDPR compliance just doesn't scale proportionately from 1 employee to 1000 -- there's a base level before the scaling starts, and the scaling from there isn't even 1:1.


That's the cost of introducing the GDPR.

Once the GDPR has been around for a few years, everybody in the workforce will (or at least should) know what it is and how it works. The cost of training will only be felt when somebody enters the workforce.


Even if you think it's worth the cost, you can't deny that there are still ongoing costs; doing something a less efficient way carries a cost, and so do the extra auditing, documentation, and overhead.


The cost of doing it is a small negative effect on some (predominantly older) companies' bottom lines.

The cost of not doing it is widespread violation of our fundamental human right to privacy.

It's definitely worth it.


On one hand...

- As someone else mentioned, the GDPR is actually easier to implement from a technical perspective in a project built from the ground up than in a legacy project that already has tons of data.

- If you know from the beginning that something is not a viable business model, then it is easier to shift your business model early on than post-revenue, after you have already built your business on personal data.

On the other hand...

- The rules are very complex and sometimes ambiguous, leaving companies sometimes legitimately unsure whether they are doing the right thing. And paying a lawyer to tell them is outside the budget of a small business.

- A mid-sized product has the worst burden because they need to convert legacy data AND don't have the resources.

- Implementing it properly often requires at least intermediate knowledge of devops / encryption / etc., which might mean no more MBA bootstrapping v1 after two weeks of learning MySQL or a coding bootcamp, without hiring an experienced Software Engineer.

Personally, as a small company, I don't mind it. But I'm also a coder. If I had to outsource my code to someone being paid hourly and you told me the GDPR would add hundreds of hours (not unrealistic), it would be a big portion of my budget with no business benefit (though personally I think a lot of benefit for the customer).


Burdensome? Well, outsource then, to accountable specialists! Capitalism 101. Aggregation could be a service.

If so, this law could create a data-aggregation giant, where data from many (non-web) sources is combined, many more than today, potentially aggravating the problem.


I expect the solution for small businesses is to outsource to the cloud. Instead of building your own user database or customer management system, have someone else do it who specializes in security and privacy.

If the specialists can handle this well, this is probably not a bad thing. But it's another example of the increase in production values that gives an advantage to larger companies.


I agree. I don't think GDPR would ever be passed in the US because it is way too heavy a regulation and does absolutely disproportionately impact startups and small businesses. By disproportionately impacting small businesses you stifle innovation and therefore competition.

I do think GDPR is a personal privacy win, but I'm also interested to see what happens in terms of tech startups and new products pulling out of the EU.


The cost of the GDPR is almost null for new startups. I provide infrastructure (and tech advice) for data analysts and data scientists (mainly; I also have DL and blockchain projects), and only one project started last year needed more than a day's worth of work, mainly because the infrastructure, with both Hadoop and Elasticsearch, was weird and the dev who put it together was gone.


GDPR seemed unnecessarily overburdensome and limiting last time I looked into it. I don't think we should have anything like it.

I don't really buy this concept that you have a reasonable expectation of privacy on other people's websites, or that site owners don't own the data collected on their services unless the EULA specifically says something to the contrary.

As a practical matter, if we make it even harder to target advertisements, then we'll end up with even more of these "you've run out of articles" type sites. I don't want to have to pay the ISP and then also pay every individual website. Collect all the data on me you want, if that makes it so.


It's a thorny legal issue, and frankly I think there is very little support for the GDPR in common law.

If you voluntarily walk into someones private shop, can you demand that the shop owner doesn't catalog that event? Is there an expectation of privacy while walking on the public street? If you voluntarily agree to receive access to a service in exchange for data collection, can that legal contract be invalidated by decree?

Don't take this as some sort of support for Facebook, I personally have never bought into the idea of social media. Luckily for me, I was a full adult long before social media appeared, so I was able to rationally see that the mass privacy invasion vs "free stuff" calculation wasn't worth it.

Having said that, you can't stop people from voluntarily submitting their data in exchange for services - there is simply no legal theory in support of banning that.


> If you voluntarily walk into someones private shop, can you demand that the shop owner doesn't catalog that event?

Unless you know the shop owner, you would not be personally identified, and yes, in fact, it would be illegal to use technology that personally identifies you when you walk into a shop. The event that _somebody_ walked into a shop can be recorded.

> Is there an expectation of privacy while walking on the public street?

Insofar as no records are made of your movement, yes. It is illegal to record somebody else's presence in a public space, although fair use examples exist (in the background of a personal vacation photo, for example). There are zones with video surveillance, but those are generally clearly marked. The general expectation is that nobody who does not happen to be in the same place as you at the same time knows that you have been there.

That is, in very broad strokes, the current legal situation in Germany pre-GDPR.


> Unless you know the shop owner, you would not be personally identified, and yes, in fact, it would be illegal to use technology that personally identifies you when you walk into a shop.

German law has often seemed silly to me, and this isn't an exception.


The principle is that people have the expectation that their movements in public spaces aren't recorded. Anything that violates that expectation ranges from problematic to outright illegal. I don't find that silly, quite the contrary.

I guess software that simply displays your name on a screen, but does not (identifiably) record that fact would be fine, though that would pose the question how the software would connect your face with your name - you would probably have to volunteer a photo for that to work.


Can shop owners in Germany not have surveillance cameras in their buildings? Unless you walk into the grocery store with a mask, you would then be on video and identifiable in some way.

In the US and the UK, almost every business of any value is recording you from the moment you walk in. At the very least, they likely have a camera on the cash register to deter theft. The UK is widely known to record public spaces, with some videos following individual people through London for miles.

Outside of your own home, privacy regarding your physical person is basically nonexistent except in a bathroom stall. In the US, it’s 100% legal to take photos of other people in public without their permission.

I think the barrier to provide the maximum amount of privacy for citizens in every aspect of their lives is too high in most of the modern world. There is simply no precedent for limiting the amount of data that is collected in public that will sway legislators across the world.


> Can shop owners in Germany not have surveillance cameras in their buildings? Unless you walk into the grocery store with a mask, you would then be on video and identifiable in some way.

They can, but you have to be informed of that fact. The business may only use the recordings to investigate a crime, it may not use it for anything else, and they have to be erased after a certain amount of time.


A country that's had both the Gestapo and the Stasi deserves some understanding about laws to prevent privacy infringement. It's not surprising that they're worried about it: they've seen what it can do.


> It is illegal to record somebody else's presence in a public space...

So, I can't take a snapshot in a restaurant or on the street if anybody is visible in the background?


You can, as I pointed out in that very sentence:

> It is illegal to record somebody else's presence in a public space, although fair use examples exist (in the background of a personal vacation photo, for example)

If you were to publish that photo, however, you would have to get every identifiable person's permission or make them unrecognizable. That extends to other information usable to identify somebody, such as a readable license plate.


Given that the quoted sentence ends with _...although fair use examples exist (in the background of a personal vacation photo, for example)_, I assume your scenario would fall under this.


Read the sentence you half-quoted a little bit further.


Doesn't anyone who walks into a shop more than once 'know' the staff? I recognised repeat customers when I worked in retail, even though I didn't know their names, and customised my service to them (e.g "how's the XXX you bought last time?"). That's illegal in Germany?


No, of course not. It's about making records.


> If you voluntarily walk into someones shop [business], can you demand that the shop owner doesn't catalog that event?

No. Rather, the GDPR allows the shop owner to ignore such a request (that "someone" walked into the shop).

If you purchase something, and they keep records for invoicing/tax purposes, your request for erasure of that information can also be ignored.

> If you voluntarily agree to receive access to a service in exchange for data collection, can that legal contract be invalidated by decree?

No. However, you do have a right to ask the service to stop processing your personal data even though this may prevent you from accessing that service in the future.

> you can't stop people from voluntarily submitting their data in exchange for services - there is simply no legal theory in support of banning that.

And the GDPR doesn't do this. Indeed, I can offer a £5 amazon gift-card in exchange for some personal data that I will share with my client, and provided I'm overt at the time of the collection, and I hand over the data immediately (i.e. do not keep a copy for other uses) there's nothing the data subject can do to screw me out of what's a perfectly reasonable deal. Their right to erasure is irrelevant since I'm not keeping the data; they have no right to stop processing because I'm already done processing. And so on.


”If you voluntarily walk into someones private shop, can you demand that the shop owner doesn't catalog that event?”

On the other end of the spectrum: if you voluntarily back up your data in iCloud, can you demand that Apple not look inside it? If you voluntarily visit a sauna, can you demand that the owner not videotape your entire visit?

The EU very much agrees with you: allowing data to be collected must be a voluntary act, and they don’t attempt to stop people from voluntary submitting data in exchange for services. The only difference is that the EU thinks “voluntary” cannot be implied or buried inside a lengthy EULA, but must be enforced through opt-in.
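That opt-in distinction is simple to state in code. A minimal sketch, with invented purpose names (an illustration of the rule, not any real consent framework):

```python
# Hypothetical sketch only: the point is that absence of a choice is not
# consent; only an explicit opt-in counts.

def record_consent(choices: dict, purpose: str) -> bool:
    """True only if the user explicitly opted in to this purpose."""
    # No pre-ticked boxes, no EULA burial: anything but an explicit
    # affirmative answer is treated as "no".
    return choices.get(purpose) is True

user_choices = {"diagnostics": True}  # user ticked one box, left the rest blank

print(record_consent(user_choices, "diagnostics"))  # True: explicit opt-in
print(record_consent(user_choices, "marketing"))    # False: never opted in
```

The EULA-burial model is the opposite default: every purpose silently treated as "yes" unless the user digs in and objects.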


> If you voluntarily walk into someones private shop, can you demand that the shop owner doesn't catalog that event?

The Data Protection Act has demanded that since 1998 (in the UK).


> GDPR seemed unnecessarily overburdensome and limiting last time I looked into it. I don't think we should have anything like it.

Can you provide some more color here? Without concrete examples this statement is pointless.


Staffing and training a department to answer legal queries regarding how a particular algorithm came to a particular conclusion


Serious question: you do have documentation that explains how your algorithms work, don't you? I mean, someone in your company knows how they work?


Consider the common situation where the algorithm is a machine learning model. Nobody knows how it works; the best you can hope for is a bit of documentation for how the algorithm came to be created.


I don't think that is quite the answer. I think you would explain that the algorithm considers past data on x to make decisions based on x. So if you say 'we consider factors like your name in order to work out ethnicity and then use that to determine your credit risk', you get a big fine.

If you say 'we use data about the previous likelihood of people in your profession and age group having an accident in order to analyze the risk... in order to price your insurance', then you are on safe ground.


Serious answer: those people are not in the business of responding to legal enquiries from customers. Even if they did, it is still an example of "burdensome".


1) You write it once and put it in your privacy policy

2) Enquiries would be from people who are a) concerned that the decision you made was unfair and would like it reviewed by a human. b) would like an explanation of the decision.

Personally I welcome this. Sitting in a bank and being refused a mortgage because 'computer says no'[0] with no recourse or reason is tough.

If you are profiling for marketing reasons I wonder what kind of enquiries you are expecting?

[0]:https://www.youtube.com/watch?v=AJQ3TM-p2QI


I am not saying it's right or wrong. I agree that it makes sense. I was simply responding to the GP asking what's an example of 'burdensome'.


> GDPR seemed unnecessarily overburdensome and limiting last time I looked into it

That's what I assumed before I looked into it, but after I did it actually seemed quite reasonable.

Can you say what parts you find "overburdensome"?


For a few examples,

-The "Right to Erasure" for one - any user can force you to delete some of your data at any time.

-Being forced to appoint a "Data Protection Officer" can definitely be burdensome to small businesses or startups that are already on the margin. More reasons the US startup scene will probably remain stronger.

-Heightened standard for "consent" to use of user data


> Being forced to appoint a "Data Protection Officer" can definitely be burdensome to small businesses or startups that are already on the margin.

You don't have to. Someone needs this responsibility, but it doesn't have to be a specific person. For us the DPO only exists as a mailing address for subject access requests; the role is shared.

> -The "Right to Erasure" for one - any user can force you to delete some of your data at any time.

But only for data you have no real use for, or it would be exempt. The flexibility of the wording works both ways.


> The "Right to Erasure" for one - any user can force you to delete some of your data at any time

Sorry, don't see how this is really a burden. Any small business is going to get such requests so rarely that they can be handled manually. In any case, it's also unlikely that implementing a feature to allow users to do this themselves would have any real cost.

As a consumer I absolutely want this right. As a business owner, I absolutely want to do right by end users. I just don't see any issue with this.
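As a rough illustration of how little code this takes, here is a minimal sketch of an erasure handler. The `users`/`invoices` schema and column names are made up for the example; note that invoice records are only anonymized rather than deleted, since data retained for tax purposes is exempt from erasure (as mentioned upthread):

```python
import sqlite3

def handle_erasure_request(conn: sqlite3.Connection, user_id: int) -> None:
    """Honour an erasure request for one user."""
    cur = conn.cursor()
    # Profile data has no retention requirement, so it is deleted outright.
    cur.execute("DELETE FROM users WHERE id = ?", (user_id,))
    # Invoices must be retained for tax purposes (an erasure exemption),
    # so only the personally identifying fields are blanked.
    cur.execute(
        "UPDATE invoices SET customer_name = NULL, customer_email = NULL"
        " WHERE user_id = ?",
        (user_id,),
    )
    conn.commit()
```

A manually handled request is a one-line call to this function; putting it behind a "delete my account" button is not much more work.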


Doesn't the "right to erasure" make blockchain-based businesses illegal (since you cannot erase data from a blockchain)?


I agree with you to a point. When I sign up for a Facebook or LinkedIn type of service and give them a bunch of my personal information, I did that voluntarily and I don't think I should really expect a lot of privacy there. Unless there was an explicit promise otherwise.

Then you have cases like Equifax, where they have a bunch of data about me that I never gave them and they are doing a poor job protecting it. You could argue that I did consent to it as it was probably buried in the terms of credit cards or other credit documents but it's a reasonable expectation in my view that such data would be protected particularly since it includes critical identifying information such as SSN, account numbers, etc.


When I don't sign up for Facebook or LinkedIn, but they do get my data because some user of theirs has my contact info[0], that's when they become pretty much equal to Equifax from an ethical perspective.

[0] LinkedIn with its dark pattern of getting access to people's emails and Facebook with its upload of contact lists on older Android devices.


I don't think the likes of Facebook "ghost profiles" are really a big deal. They do provide some benefit to the users of the services.


It's data about me being shared with someone I didn't want it to be shared with without my consent. As such, it's a really big deal to me.

It's also a really big deal that one can't protect (him|her)self from such data being shared with Facebook. That's why I've compared them with Equifax from a moral standpoint, since the parent comment specifically mentioned Facebook and LinkedIn as an opposition to what Equifax was doing.

Whether or not they do provide some benefit to actual Facebook users is completely irrelevant.


Here’s what I’d like: any advertisement I see on the Internet should have a small pictograph/icon/link I can select that tells me—specifically—why I’m seeing that ad. Precisely what data points were used, was it remarketing, was it an uploaded list of email addresses, etc.


I worry about a GDPR-like law preventing innovation. For example, wouldn't it make IPFS-like storage [1], which relies on duplication and can't remove files, illegal?

[1] https://en.wikipedia.org/wiki/InterPlanetary_File_System


You /can/ implement a deletion mechanism, but you just can't "guarantee" it. I think it'd be up to a court to decide if that would be grounds for winning a case against a potential company that used IPFS (I don't know any that do).
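One deletion mechanism that works even on an append-only store is "crypto-shredding": keep only ciphertext in the immutable layer, keep the key in mutable storage, and honour an erasure request by destroying the key. A toy sketch of the idea (the XOR keystream below is for illustration only, not real crypto; a production system would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (toy construction).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

immutable_store = []  # append-only: records can never be removed
keys = {}             # mutable key store, kept outside the immutable layer

key = secrets.token_bytes(32)
keys["user-42"] = key
immutable_store.append(encrypt(key, b"alice@example.com"))

# Erasure request: drop the key. The ciphertext still sits in the store,
# but it is now computationally unreadable.
del keys["user-42"]
```

Whether a regulator or court would accept key destruction as "erasure" is exactly the kind of open question the parent comment raises.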


Maybe America should wait a bit to see how it goes before jumping on the bandwagon. There isn't much to gain by adopting these (potentially beneficial) standards sooner rather than later.


There isn't much to gain for whom? If regulations help companies see that hoarding personal data is a liability, they will do less of it, and US consumers will have less to lose in each new data breach. That's a lot to gain for consumers!


This assumes that the regulations successfully achieve their goals. Otherwise we've fallen into the old trap "we need to do something, this is something, therefore we must do it".

There is ample possibility that the unintended consequences of GDPR play out in ways the regulators do not expect. Assuming otherwise is foolish.


Absolutely, the EU is willing to conduct this experiment, you may as well wait and see what the results are.

As far as I can see, the companies targeted will do their utmost to avoid any impact on their bottom line, so it's quite likely they will discover plenty of holes in the legislation.


If they do business in Europe at all, they're going to need to comply anyway. GDPR compliance is our biggest development push at the company I work for. As a SaaS provider offering our software in the EU, we face heavy penalties.


America should have data privacy laws to begin with, and a way to completely opt out of Equifax etc.


As far as I'm concerned the Internet is public infrastructure and you should never expect privacy of your behavior in public places. Besides, "identifying" information should be useless, but it isn't today.

What if we stopped using "identifying" information as authenticating information? PII is only useful because the authentications systems we have in place are such sh*t. Changing this is a much more achievable scope, and would actually address the core value of stolen PII.


In the US is there a possibility of a 1st Amendment challenge? The act of recording information could be seen as speech or publication.

If we take computers out of the argument, it would look like this: the government telling people that they cannot take notes or make records of information that they hear. Case law has found, for instance, that photography in public (which is making records) cannot be banned.


Why do they define small companies using the number of employees or the money they make? In today’s world, laws should be made based on the amount of data a company has. If they have data on upwards of 10 million they need to comply with all data protection and privacy laws. Companies should and will plan their funding and operations accordingly.


How do you define 10 million pieces of data?


Yeah, that's the hard part about this. If they have the IP addresses of 10 million people, that's probably less critical than if they have medical data on even just 10 000 people.

But the IP collection of how-to-live-with-epilepsy.com might be worse again, since it implicitly carries the information that you probably have epilepsy.


Fuck I omitted the word users from my blurb.


Could Americans take advantage of EU protections by using European services?


Yes. Every company with legal standing in the EU has to comply with the GDPR, regardless of what data they process.

Companies outside of the EU only have to comply with it when they are processing data of EU citizens.

https://gdpr-info.eu/art-3-gdpr/


It seems to me like Americans might benefit from EU protections in any case, since corporations have to (from my understanding) apply said protections to EU citizens living outside the EU and those using VPNs to connect from outside the EU.


I don't think the GDPR applies to EU citizens outside the EU; only to people in the EU.

Also, the GDPR doesn't necessarily apply to every non-EU site that has EU visitors, only to those who in some way target EU customers (the rules are a bit ambiguous: https://gdpr-info.eu/recitals/no-23/)

So if someone outside the EU wants to benefit from the GDPR, the best way is to use services by EU companies, as those are required to apply it to everyone.


While I agree with the Economist, the idea that the US look outside its borders for advice is laughable. American exceptionalism and all that.


[flagged]


Please keep generic political talking points off HN.



