Black Hat: GDPR privacy law exploited to reveal personal data (bbc.co.uk)
397 points by jfk13 41 days ago | 232 comments



This is horrible. So right now, in order to get access to data for a certain person, you just need to work your way through a few of the potential services he is using and pivot from there.

1. The data might include things like ID documents (e.g. crypto exchanges).

2. You can use that data to ask for more data. If you got a copy of his passport, you can now use that new piece to ask for even more.

3. Looks like some people still store passwords in plain text or don't mind exchanging them over email. This means some of these services might reveal a password to you.

4. With that, you can start hacking into other accounts. You also have loads of knowledge about the person, so you might be able to guess his password.

5. Now you have access to his email, Dropbox, banking details, and you can maybe even lock him out.

Right on time: https://en.wikipedia.org/wiki/Cobra_effect


One of the major goals of the GDPR is to discourage firms from retaining personal data in the first place. It never used to cost them anything, so they kept it regardless of its use. Now that there are big risks to keeping it, these firms have to think twice about it.

This "cobra effect" is one more reason NOT to retain personal information in the first place.


And yet other laws require the collection and retention of sensitive user data, in particular any service that allows for transmission of large amounts of money (crypto exchanges were a good example).


That is well covered within GDPR scope. Retaining data to fulfill legal obligations is allowed. One common related example is invoice data.


But then "just don't keep the data" is not an effective response to these attacks, which work by requesting someone else's data.


Yes, it requires people to be trained in this area to make these judgements ... imagine that!


How well is that turning out? You can't rely on people to get it right 100% of the time. I definitely wouldn't.


I signed up to a crypto exchange, then I requested removal of my account and data and they said they cannot delete my data. Guess what? I had zero transactions, the account was new, etc. They are legally obliged to keep almost nothing for 7 years. How lovely. At least they were open about it, right? Some will just tell you they deleted your account when in fact it was just a soft delete. Screw these places. I, for one, hope that Bisq will become popular.


You don’t have to get it right 100% of the time, you just have to look like you’re trying.


Not to mention anything that requires the provision of real-world goods and/or services. If a data protection law ever had sufficient teeth and regulation to cause Uber or Seamless to force a user to type their credit card information and home address every time they wish to make a transaction, it would be wildly decried as paternalistic by the public. And those companies are equally vulnerable to this type of social engineering.

GDPR identity verification as a service would be an amazing thing to have. Articles like [0] bring up a sad irony: in order to verify someone's request to delete their information, you need to obtain information from them in an unusual way that you may not have built infrastructure to easily or automatically delete.

[0] https://www.braze.com/perspectives/article/gdpr-compliance-d...


I'm not sure why everybody here seems to think that sites need to fork over all data they have in store via e-mail if a registered user requests it via e-mail.

A site could easily be compliant by answering general questions (this is what kind of data we have, this is how we collect it, this is what we need it for, this is our legal basis) via e-mail but requiring data exports to be performed via the site itself.

The GDPR actually encourages sites to provide automated self-serve data export mechanisms. The entire point of being able to request a copy of your data is data portability.

"But what if the user never signed up?", I hear some people ask. Why did you collect their data in the first place? If you collect sensitive data like that described in the BBC article, you better have explicit consent and if you have explicit verifiable consent, you should be able to verify a request is made using the same identity that granted the consent (be it an e-mail, a phone call or a signature). So just ask for that again.

Also, if you can't easily comply with a data request because the data is so sensitive and the identity can't easily be verified, you can still explicitly say so. Describe the kind of data you have and offer to delete it, then offer whatever form of authentication is adequate given the level of sensitivity of the data in question should they still demand it.

I'm not sure why some people seem to think this is particularly unreasonable. Just because it isn't code, doesn't mean you have to reinvent authentication from scratch. Think of how you identify someone before you agree to store their data. You already do that for all other business processes, why should data requests be any different?

EDIT: Also if you figure you can't easily verify someone's identity after you took their data, that sounds like a good reason not to take their data in the first place. And that's the entire point of the GDPR: minimising personal data. The GDPR makes personal data toxic and that's intentional. Just like toxic substances you need special precautions for handling and storing it, and you probably want to avoid both unless absolutely necessary.
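For instance, the "verify the request using the same identity that granted consent" approach can be as simple as e-mailing a one-time token to the address already on file and only acting on the request when the token comes back. A minimal sketch, assuming nothing beyond the standard library (all function and variable names here are illustrative, not from any real framework):

```python
import hashlib
import hmac
import secrets
import time

TOKEN_TTL = 24 * 3600  # challenge valid for one day

def issue_challenge(server_key: bytes, email: str, now: float) -> str:
    """Create a one-time token to send to the e-mail address on record."""
    nonce = secrets.token_hex(16)
    expiry = int(now + TOKEN_TTL)
    msg = f"{email}|{nonce}|{expiry}".encode()
    mac = hmac.new(server_key, msg, hashlib.sha256).hexdigest()
    return f"{nonce}|{expiry}|{mac}"

def verify_challenge(server_key: bytes, email: str, token: str, now: float) -> bool:
    """Act on the data request only if the token matches and hasn't expired."""
    try:
        nonce, expiry, mac = token.split("|")
    except ValueError:
        return False
    if now > int(expiry):
        return False
    msg = f"{email}|{nonce}|{expiry}".encode()
    expected = hmac.new(server_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

key = secrets.token_bytes(32)
t0 = time.time()
token = issue_challenge(key, "alice@example.com", t0)
assert verify_challenge(key, "alice@example.com", token, t0)        # right address
assert not verify_challenge(key, "mallory@example.com", token, t0)  # wrong address
```

The point is that the inbound e-mail itself proves nothing; only control of the address that granted consent does.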


Which I agree should become a new habit. Yet the mandatory right to access doesn't help.


I'll let you in on a secret. Government institutions, which in general have huge amounts of information about you and are notoriously bad at security, don't even get fined under the GDPR. The worst that can happen to them is bad press.

So the institution that holds the healthcare data of all German citizens cannot be fined under the GDPR. The same goes for any other KdöR:

https://de.wikipedia.org/wiki/K%C3%B6rperschaft_des_%C3%B6ff...

EDIT: weird, any explanation for the downvotes?


I've heard of local GDPR complaints and enforcement actions (no fines yet, in administrative proceedings) against various state agencies, municipalities and also hospitals, so it does apply to state institutions at least to a certain extent. They have it a bit easier with the reasons for processing, as usually there's an existing law that mandates (and thus allows) the data processing they do, so they usually don't need consent, but the other requirements should apply.

Why wouldn't GDPR apply to german KdöR? I'm not aware of any exemptions in GDPR that could apply to them; governments can make specific local exceptions for national security, defense, judicial process, etc needs (https://gdpr-info.eu/art-23-gdpr/) but Germany shouldn't be able to simply exempt all their KdöR.

One thing is that in some jurisdictions public institutions can't be required to pay fines to the regulator (because transferring money from one gov't pocket to another doesn't make that much sense); however, you can still get an administrative ruling forcing them to change their policies, and if your rights have been violated, then you're entitled to compensation. The "can't be fined" only applies to what they'd owe the regulator, not to harmed individuals.


> weird, any explanation for the downvotes?

Yes, you're simply wrong. Government agencies do not have a blanket exemption from GDPR rules. There are some differences, and EU countries have some autonomy in the particulars. But as a general principle, the rules are the same: data may only be stored to fulfil a valid purpose, processing and transmission require consent, etc.

Fines don't come into it because the government is never fined: first, because it wouldn't make much sense, as fines are payable to that very government anyway; second, because government officials are simply expected to respect court verdicts without the necessity of fines.

If you don't trust that system you're out of luck, because it's how every single other protection you have against the government is and has been enforced since the inception of "the rule of law".


We had a mayor fined for sending out election mails to subscribers of a list intended for other purposes. Not exactly a "big government agency", but it still counts.

The downvotes are probably because you failed to cite the laws that exempt government agencies from the GDPR.


It's fairly well cited all over the internet that the EU commission and other European institutions claim they are exempt from the GDPR, after they were found to be in breach of the legislation it created.


This does nothing to discourage keeping data around. A company does not care if they, while following best-effort GDPR practice, release data to a hacker that causes harm to a user. They can simply hide behind the GDPR legislation to say “we did nothing wrong, the law is broken, we were trying our best, we accept no liability”


They are still liable, the waiver is not acceptable under EU law for personal data.

How big a liability it is, is to be decided in a court of law.


Disclosing data to an individual because you make no attempts to verify their identity is in itself a GDPR violation. As far as the GDPR is concerned it doesn't matter whether you were hacked or whether your employees recklessly exposed information to individuals. The only difference is scale and scope.


This is not a problem with the GDPR. This is a problem with organizations (companies and governments) treating publicly available data as private keys.


Not private keys, secret keys. "A Secret is something you tell one other person [So I'm telling you]".

But yes, that's exactly the problem, the Credit Reference Agencies actually sell this as a service. They think it's a big improvement, and if they're right that ought to be terrifying - what was being done before this crap? "Nothing" is likely to be the depressing answer.

"Hey, we can tell if this is really Dave Smith, because we'll ask 'How much did you spend on your credit card in May?' and the real Dave knows the answer, and so do we." Well yes, and so does anybody who saw Dave's card statement, and people at Dave's bank, and the card company, and... also when Dave answers $849.28 that won't match and he gets a bad user experience. Oh, you mean the _other_ credit card. Yeah, Dave only spent $30.26 on that, he mostly uses it to buy fuel for his jet ski... So you try to fix this: "How much did you spend on the VISA ending 4282 in May?" Now you've given away a useful fact to an adversary. Idiots.


Moreover, it's a problem with the current state of "identity" as a whole. Most of the data obtained in the article - passports, addresses, phone numbers, credit cards - does not change very often. Some documents expire, but even then they can remain valid for another 3 to 10 years.

We need to move to a system that allows rapid expiry of PII data. Then it will not matter if someone is able to social engineer this data out of all these companies. By the time the data leaves the company's HQ, it is already out of date and therefore impossible to use with new services.


I had a thought a while back. In the vast majority of uses, identity is exactly the issue. Yet in the vast majority of compromises or problems, the problem is correlation and combination of data. By this I mean that, say, the Social Security Administration needs to be able to identify a citizen in order to know whether and how much to pay a person of a certain identity, to avoid paying the wrong amount, paying the wrong person, double-paying, etc. There does not need to exist an identity which spreads beyond that. Your credit card company does not need to use the same identity; a unique identity which functions solely within the context of the credit card account is all that is needed. Instead, we have identities that get spread across multiple services even though there is never any actual need to relate or correlate the activity across those services. This seems like the sort of situation that cryptography can solve, although obviously there would be a lot of usability work to be done. But it seems to me that cryptographically unrelatable distinct identities which have only one possible point of aggregation (you) are what is needed.
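The simplest version of this idea is to derive a distinct pseudonym per service from one master secret that only the person holds. This toy sketch uses an HMAC; a real deployment would use an asymmetric scheme so services can verify the identity without learning the secret, but the unlinkability property is the same:

```python
import hashlib
import hmac

def service_identity(master_secret: bytes, service: str) -> str:
    """Derive a stable pseudonym scoped to a single service.

    Without the master secret, two derived IDs cannot be correlated.
    """
    return hmac.new(master_secret, service.encode(), hashlib.sha256).hexdigest()

secret = b"held only by the citizen, never shared"
ssa_id = service_identity(secret, "social-security")
card_id = service_identity(secret, "credit-card")

assert ssa_id != card_id                                      # no cross-service linkage
assert ssa_id == service_identity(secret, "social-security")  # stable per service
```

A leak of `card_id` from one database tells an attacker nothing about the same person's identifier anywhere else, which is exactly the "one point of aggregation" property the comment describes.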


I've heard that in Japan, stamping with your personal stamp is accepted (and perhaps sometimes even required?). They have made electronic gadgets that store their stamps as images so that they can directly sign (stamp) an electronic document (using a specific input device).

I think we should have something like this, but with a personal certificate instead of an image. Of course I guess it requires some logistics (lost/stolen stamps, expiration dates, perhaps the stamp should be activated with fingerprints...).


Isn’t this equivalent to stamping PDFs with your signature like we do elsewhere?

Also the stamp has to be registered to have legal value, which makes it tough to change.

But your idea of signing with the result of some personal certificate is very nice. It can be checked cryptographically, is different every time, and it wouldn't matter if the stamp itself is easy to reproduce.


> Also the stamp has to be registered to have legal value, which makes it tough to change.

This is not actually true. Some stamps need to be registered (for example the stamp for corporation), but personal stamps for most applications don't need to be registered -- even for bank accounts. I have several and I'm always forgetting which one I used for my different bank accounts :-P.

One of the strange things about Japanese stamps is that if you let someone have your stamp, then it is considered that you have given them permission to do whatever they want with that stamp. The very fact that they have the stamp means that they are authorised. I got very angry at my previous employer (the government, no less) when my contract was over. They demanded that I give them my stamp I had used for stamping my time card. It happened to be the one I used for my bank account too (because I was clueless at the time!) It took me a couple of months to work around that. If you are ever working in Japan, treat your hanko (stamps) exactly the same way you would treat your encryption keys: use a different one for each application if possible.


As far as I know, the registering part is mandatory for legal use, but it's up to the accepting party whether to check it.

For instance, as you point out, for banks you can open an account without any check (you're giving them money), but you won't get a mortgage without proof of registration (they're taking the risk).

At a previous company my boss kept his company stamp (a shachihata) in a drawer for us to use when he wasn't there. It's interesting because by the rule of law we would be the ones at fault for using someone's stamp, so it had better be for stuff he approved verbally or in other ways.


Too bad anyone can access your stamp if you simply lose it. When I first saw the stamp thing for myself, I couldn't fathom how anyone would consider that secure. Better than a signature? Maybe. But easily reproducible and too tangible to consider safe.


See the examples elsewhere in the discussion of personal certificates and signing keys embedded in government ID chip cards of certain European countries. Estonia has had this for more than a decade, and now many more countries have something like it.


If the GDPR created new vectors of attack which didn't exist before, there's a problem with the GDPR even if there are also problems with organizations. Otherwise, you have just created a perfect excuse for any lawmaker: "my law was written with good intentions, so it's not my problem if there are unintended consequences."


Uhm, the corporations handing out private data to the wrong person are definitely violating the GDPR, or probably some earlier privacy law, because you weren't supposed to give out people's personal data like that before the GDPR either.


The only shocking thing to me is this is the first time I’ve seen any story about this hole in the GDPR. This was one of the top reasons we blocked and deleted all EU users. How do I verify that a request is legitimately from a user, short of them arriving in person and providing some biometrics, which presumably we would need to collect from them in the beginning?

I have no idea. Any system with a high false negative rate is breaking the law, and one with a high false positive rate seems even worse.


That is true. But it is the current state of the world and GDPR enables people to more effectively weaponize that.


How is that not a problem with GDPR? They passed a law which relies on technology which does not exist. There is no way to safely and reliably identify an individual electronically.


Unless you live in Estonia, where each citizen has a private key (on a smart card) and can electronically sign things to prove identity.

Governments could work with big tech players to confirm that certain Gmail/Facebook accounts are linked to one, and only one, national identity. Then through OAuth, you could use that to log in anywhere else, proving you are a real person with exactly one ID (which needn't be revealed; the service only learns that you have exactly one account).


Too bad the rest of the EU doesn't live in Estonia, it would have made GDPR much better.


In my country (Oz) we had a referendum a few decades back about a national ID card, which failed to pass. I for one am against any form of centralised ID system. The basic premise (of the time) was, "if you want to know me, here I am". The government department of Births, Deaths and Marriages goes to some lengths to ensure that these 3 things are not tied to any one number. Ironically, the government got what it wanted when it introduced a Tax File Number, which has bled into some other systems like banking, but thankfully it's not as bad as the U.S.'s SSN.

I seem to recall reading here on HN a few weeks back how surnames came into existence: it was because the (? Italian) government wanted to track taxes. Before that everyone had several ways of naming themselves: John, John son of Joe, John of someplace, John the carpenter, etc. Personally, I really like that because I'm not just "one thing", but am a person who has different aspects.


You reap what you sow unfortunately. The fact is that the government still keeps track of you but you just have a boatload of downsides by not having a proper system citizens can use.


Any centralized identity system solves a problem we don't have. It doesn't simply serve to identify a person. It serves to aggregate an identity and tie together extremely disparate and unrelated data. It enables a data leak or abuse to not just compromise one service, but all of them at once. If there is a leak of data from, say, a dating site that involves dumping the public keys of the users alongside the user activity associated with it, then the credit card company and electric company and water company and the gaming forum you signed up for and multitudes of other utterly unrelated organizations now have the ability to correlate your dating activity with your activity on their service. The identities on all of those separate systems being the same identity is the problem a centralized system solves. And it's a problem we have never had.


> Any centralized identity system solves a problem we don't have.

You already have one; SSNs and similar absolutely count and allow aggregating different data. Not to mention that you're forgetting that humans don't have a lot of entropy; k-anonymous data is not what we have by default. You are wrong that a centralized system "provides a way to aggregate data": it just makes aggregation easier. I live in a country that actually gives citizens access to a centralized identity system, and I'd say it has solved much more than you're giving it credit for.


Interesting that you consistently refer to the target as "he" as if women weren't a major target of this kind of campaign.


Some people refer to hypothetical people in stories as the same sex as the person describing the story. I'm not positive, but I imagine the parent is also a "he". I don't consider this important at all, and I think you're being pedantic.


I guess I agree it's a form of pedantry, but once you're a bit used to reading singular "they" (and it's hard to escape nowadays) you get used to it, and the opposite starts looking weird. Also, it's pedantry that seems to actually be socially beneficial: https://www.theguardian.com/science/2019/aug/05/he-she-or-ge... (I don't agree with everything being done for "gender-neutral language", especially in German. But this particular case is simple and useful enough in English.)

For whatever it's worth, the reason this tripped me up here was that I had to read the original post several times because I thought that "he" was referring to the attacker (as in the featured article), not to the target. Re-reading it now I don't quite see why I was thinking that.


I use non-specific terms like "they" as often as I can, largely because I don't want my audience to be tripped up with superficial distractions. I don't read into other people's wordings with contempt unless I know for certain they're being malicious. I think the saying goes something like... Don't attribute malice where ignorance would suffice...


Please stop seeing *isms everywhere. It's equally possible that he isn't a native speaker.


You could equally have praised the OP for not stereotyping.


This is pretty appalling, really:

"Overall, of the 83 firms known to have held data... 24% supplied personal information without verifying the requester's identity."

Want someone else's personal data? No need to "hack into" any systems; just ask for it!


This was actually one of the risks we identified when looking at GDPR for my own businesses last year. Given that in some cases all we have is an online account with minimal personal details, how can we possibly verify their identity to an acceptable standard if someone does send us a GDPR subject access request of any kind? If they have some sort of account with us already and that has associated ID and security checks, that's one thing, but what if they don't or they claim to have forgotten their password etc?

Even if someone were willing to send us "strong" ID, we don't have any special knowledge of what official government-issued ID looks like in every country where we have customers, nor the human resources or automated technology to investigate in detail whether any ID that is sent might be faked. At best, we could find an image of a passport/driving licence/whatever from that person's country and see if what they've sent us looks about right and matches any personal details we do have for the data subject.

Our disturbing conclusion was that if someone did ever send us certain types of request in connection with certain accounts, there might be no action we could safely take to resolve the situation that would definitely be lawful. If we get scammed by fake ID then we're breaking the law. If we don't accept ID that is real and comply with the subject request, we're also breaking the law.

Fortunately the affected services aren't doing anything particularly exciting or risky with personal data either, so it seems unlikely that any serious harm would come to anyone whatever happened in our case. However, the same basic issue surely affects many other data controllers/processors, and they won't necessarily be such unlikely targets as the research here shows. I haven't yet found any practical guidance from the regulators on what would be considered reasonable in this sort of situation.


> Even if someone were willing to send us "strong" ID

Amusing that legislation intended to improve privacy is normalising sending IDs to sites one doesn’t trust enough to keep one’s personal data.


I think I have a "legally safe", but slow, solution:

1. The user requests information about them but claims that they lost the password and the email address.

2. The company sends a form (in English) that basically asks for the information on the id (name, date of birth, etc) and the information being requested (ex: password reset).

3. The user prints this form, fills it out and attaches a copy of their ID.

4. The user goes to a notary in their country to get the signature and ID verified.

5. The user sends the form to the client's country Ministry of Foreign Affairs to get the notary's signature verified.

6. The user sends the form to the embassy of the company's country to get the Ministry of Foreign Affairs seal/signature verified.

7. The user takes a picture of them holding the form and sends that picture via email or similar to the company.

8. The user sends the form via snail mail to the company.

This process can be shortened if both the company's country and the client's country have ratified the Apostille Convention, formally known as the Hague Convention Abolishing the Requirement of Legalisation for Foreign Public Documents.


Nope, that's not legally safe -- we've been told by data authorities that the verification methods cannot be "overly burdensome" to the data subject.

I live in dread of subject access requests (and thankfully have only had one, and it happened to be really easy to verify).


Maybe you can require this long process only for people that aren't in your country.

I don't know where you live, but here in Brazil having to get documents and signatures verified by a notary is super common (and super hated).

For example: I once wanted to unregister a domain I had. The only two ways of doing it were: don't pay the renewal fee OR get a paper form verified by a notary sent over snail mail to the registro.br office.

--------

Also, you may give your clients the option of verifying their signature at the embassy of the company's country in the client's country. This would skip that whole Ministry of Foreign Affairs nonsense.

--------------

A hacky way of avoiding this "excessive burden" problem is to offer all your users the possibility of linking their public gpg key to their account.

Almost no one would do it, but you gave them the opportunity to do so, thus you cover your ass at least a bit.
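The verification side of that GPG idea is straightforward: honor a request only if it verifies against the key the user linked to the account in advance. The comment proposes OpenPGP public keys; since the Python stdlib has no asymmetric crypto, this sketch substitutes an HMAC secret registered at signup as a stand-in "key on the account" — the workflow is the same, only the primitive differs:

```python
import hashlib
import hmac

def sign_request(account_key: bytes, request_body: str) -> str:
    """What the user would do client-side (with GPG: a detached signature)."""
    return hmac.new(account_key, request_body.encode(), hashlib.sha256).hexdigest()

def verify_request(account_key: bytes, request_body: str, signature: str) -> bool:
    """Server side: accept only requests signed with the key on file."""
    expected = sign_request(account_key, request_body)
    return hmac.compare_digest(expected, signature)

key_on_file = b"registered when the account was created"
body = "Please delete all data for account 4242"
sig = sign_request(key_on_file, body)

assert verify_request(key_on_file, body, sig)                          # genuine request
assert not verify_request(key_on_file, "Export everything instead", sig)  # tampered body
```

As the comment says, almost nobody would register a key, but the handful who do get strong verification at essentially no cost to the site.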


IIRC they also allow charging the person - for a "reasonable" amount - for the data retrieval process (which I assume would include the "identity verification" part).

Maybe using the 3-D Secure protocol (especially the second revision) would be enough to unburden yourself for verifying the identity as Mastercard/Visa/American Express supposedly check it for you.

This would work only under some conditions (the data subject must have a card in their name that supports the 3-D Secure protocol, and you need to have added a complete payment platform to your website/app/whatever), and it doesn't solve all the other problems we have (like being sure we are delivering the right information, especially regarding namesakes and so on), but it could be something to investigate.


Just as a fun tidbit: I've grown accustomed to using Estonia's bank links, and then roughly 10 years later the 3-D Secure system started appearing on foreign sites, providing almost the same functionality. It's nice to finally see some steps taken, but damn, it's basically 20 years behind what everyone could have had.


The ability to charge is only for either (a) additional copies of data or (b) if the request is "manifestly unfounded or excessive." Given that there's no guidance on (b), that eliminates 99.9% of all SARs.


Government officials created an untenable law in the name of the technological boogey man?! </mild-shock>


Not a boogey man. Misuse of personal data is happening daily, but the law is not exactly super great. Better than nothing, I suppose.


I think whether the current implementation of GDPR is better than nothing is still up for debate. The intention is mostly good, but the execution leaves an immense amount to be desired (as evidenced by common threads like this) and disproportionately affects/burdens small businesses.


Hey now. Mass surveillance, data breaches, and non-consensual data sale are absolutely issues, and will continue to be. This isn’t a boogey man, this is real, and it’s entirely the technology industry’s fault. Maybe we wouldn’t have to deal with the GDPR if we treated data as a liability from the beginning.


> This isn’t a boogey man, this is real, and it’s entirely the technology industry’s fault.

I agree there are real dangers, but I don't think blaming it entirely on the tech industry is fair. The financial services industry has been profiling as much as it could get away with forever. Government security services and the like obviously do this sort of thing as well. Modern technology has made these things easier and surely provides a generous source of additional data to both of the above groups, and targeted ads have created another not entirely welcome variation on the theme, but modern technology is hardly the original source of creepy data-hoarding behaviour.


The email used during registration is sufficient. If you don't have email then username+password.

If they don't have that they don't get access to data. They need to be able to prove who they are and reasonably that is the same information that is used during registration.

If password is lost then tough luck.


> If password is lost then tough luck

This is your personal opinion of how it should work, not GDPR.


I doubt that a black-hat attacker is going to file a lawsuit to obtain someone else's personal information.


But what if the request is genuine?


Then the user will be authenticated by the court, and you will have to make your case that without the court's intervention, you could not be certain of the requester's identity.

This isn't black and white. It is legally ok to question the validity of GDPR data subject requests.


> I doubt that a black-hat attacker is going to file a lawsuit

If you tell someone requesting their own data under GDPR “tough luck, you lost your password,” that could invite remedies under the law.


What are they going to sue for? "I can’t identify myself but still want the data of some random person I claim to be"?


GDPR requires the user to identify themselves to request data.

If a password and username is the only possible means of identification, then that is enough. And if one cannot provide that, then tough luck it is.


I'm not sure it should be this hard. If all you know about a customer is his e-mail address and site password, then you can only verify the user's identity by e-mail address and password. A "strong" ID only helps if you have prior information about that ID stored on your end.

Likewise, it stands to reason that you shouldn't provide information about a user on e-mail request alone, without proof that said user also knows the password to your site. You could even create a policy that a user cannot make GDPR requests for up to seven days following an e-mail password reset action, to mitigate the account hijack situation.
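That cooldown policy is a one-liner to enforce. A minimal sketch (names are illustrative, not from any real framework):

```python
from datetime import datetime, timedelta
from typing import Optional

# Refuse subject access requests for a window after a password reset,
# so a freshly hijacked account can't be drained of data immediately.
COOLDOWN = timedelta(days=7)

def request_allowed(last_password_reset: Optional[datetime], now: datetime) -> bool:
    """Allow a GDPR request only if no recent reset occurred."""
    if last_password_reset is None:
        return True  # password never reset: nothing suspicious
    return now - last_password_reset >= COOLDOWN

now = datetime(2019, 8, 10, 12, 0)
assert request_allowed(None, now)                         # never reset: fine
assert not request_allowed(now - timedelta(days=2), now)  # recent reset: blocked
assert request_allowed(now - timedelta(days=8), now)      # window elapsed: fine
```

The legitimate user loses a week at worst; the attacker who just reset the password via a hijacked inbox gets nothing in the meantime.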

As for data processors, who have no direct relationship with end users, the safest policy is to not collect personally-identifying information at all, only use the data controller's surrogate identity. And if you must collect PII, then use that previously stored data to verify the user, not an ID card that you have no way of validating. If you only have the natural address of the user (for example, you're a shipping company), then only offer to send the data to that known address, or make the user prove his residence (e.g. utility bills); if you only know a bank account number, ask for a (sufficiently-redacted) bank statement to that effect, etc.

A government-issued ID is only worth asking for if you can verify it.


> If we get scammed by fake ID then we're breaking the law.

The person who committed fraud by providing a fake ID is breaking the law. I'd think a regulator would take that into account when deciding whether it would be in the public interest to prosecute you.


One option would be to evaluate if it would be safe to delete the data.

In that case you could offer to delete the data. Countries typically have some expensive way to prove identity. So: delete, or actually prove who you are. Of course, sending a message to, say, a known email address announcing your intent to do so helps avoid angry customers.

If you cannot delete the data because it is valuable to the customer, then just to offer the service you already have to figure out how to give people access to their accounts if they lose the password.


Offering to let attackers delete customer data is not a good solution.


"Hi, I'd like to request all my user data under the username phicoh." --> "I've forgotten the password to my account; would you just delete it instead then?"


This is nothing new. People could make "subject access requests" before the GDPR, and you could charge up to £10 (in the UK) to reply.

It's perfectly lawful to refuse to disclose anything until you have received reasonable evidence of the person's identity and payment (I don't know if you can still charge under the GDPR -- Edit: Apparently you may not charge for the first copy, but you may for further copies).

Considering the potential risks (including bad PR) and penalties it is better to refuse to disclose data because you have doubts rather than to disclose easily.


"but what if they don't or they claim to have forgotten their password etc?"

How do you manage this kind of use case in your normal operation? Does it mean that if you cannot reset your password, your account is locked forever? If not, how is that process any less valid for answering a GDPR request?


> we don't have any special knowledge of what official government-issued ID looks like in every country where we have customers, nor the human resources or automated technology to investigate in detail whether any ID that is sent might be faked.

This sounds like one of the risks of doing international business. If you can't follow the laws then don't play.


OP sounds like he was trying hard to navigate and follow the law. The problem is that he found no reasonable solution that both protected his users' privacy and followed the rules, and he was disturbed by the implications, considering the law was supposed to protect them in the first place. Those are very valid criticisms.

Simply dismissing everyone who shows concern about a law as mere law dodgers or foreign bad actors disinterested in following the local law is unhelpful and borderline anti-intellectual.


One of the intended goals of GDPR is to reduce the processing of personal data - not only that the companies should do it differently, but that at least half of the companies who currently have my data really shouldn't have it in the first place.

It depends on the circumstances of each scenario, but it would be completely reasonable if large numbers of smallish companies acknowledge that they lack the capacity to handle personal data properly and the recommended strategy for GDPR compliance is that they should simply stop requesting and storing that data. Yes, it raises the barriers for entry in areas where that data is absolutely necessary. But it also makes companies think twice whether it's really necessary and worth it, and that's a good thing.


> it also makes companies think twice whether it's really necessary and worth it

The simple fact that someone has an account at a service can be private information. For anything requiring even a modicum of persistence, keeping these data is tough to avoid.

I think most of HN agrees GDPR’s goals are good. It was just sloppily drafted, passed and implemented.


GDPR does not care about privacy.

It cares about personal identification. Such as full name and address, and a bunch of other protected things.

Why would any random service request, process or more importantly store these?


> Why would any random service request, process or more importantly store these?

So they can identify users for purposes of complying with GDPR? (For example, to handle the data requests highlighted in this post.)


The GDPR recitals do not recommend gathering additional data solely to be able to fulfill these requests.

See https://gdpr-info.eu/recitals/no-64/ and https://gdpr-info.eu/recitals/no-57/ - the key part is "A controller should not retain personal data for the sole purpose of being able to react to potential requests."


> The GDPR recitals do not recommend gathering additional data solely to be able to fulfill these requests

One, these are non-binding recitals.

Two, the conflict between (a) data-furnishing requirements and (b) advice against retaining data that would validate the requestor is exactly the point of this post.


That’s simply not the law. The official guidance (https://gdpr.eu/eu-gdpr-personal-data/) is that even just an IP address or a license plate number are sufficient to count as identifying a person.


It depends on the circumstances of each scenario, but it would be completely reasonable if large numbers of smallish companies acknowledge that they lack the capacity to handle personal data properly and the recommended strategy for GDPR compliance is that they should simply stop requesting and storing that data.

That's not a reasonable strategy at all. Any company doing more than selling basic goods in person for cash probably needs to process some level of personal data for legitimate reasons. In fact, it will probably be legally required to do so in several respects.

It's all very well arguing that you want to reduce processing of personal data, but I think what you really mean is that you want to reduce processing of personal data in ways you don't like. Some amount of processing of personal data about you is always going to be necessary, and indeed essential to your vital interests and the normal functioning of society.


"Any company doing more than selling basic goods in person for cash probably needs to process some level of personal data for legitimate reasons." is a bit tricky, and I'm not certain that it's true. I'd say that many common business processes use personal data for reasons of tradition, but don't really need it.

1) "Selling for cash" - accepting credit cards and wire transfers (paying by check isn't really a thing in most of the EU) doesn't necessarily require you to store PII. Yes, you could get the cardholder name, but perhaps you shouldn't; just as you (most likely, for PCI DSS reasons) don't handle CC numbers but delegate them to e.g. a merchant gateway service, if you also delegate CC fraud analysis to them, then you don't need any details about the transaction beyond the amount and ID.

2) "Selling in person" - delivery is tricky, but you can reduce the exposure a lot by having the information be transient. If you're delivering pizza, then you don't need to store every order's phone number and address forever; and if you don't store the delivery data beyond the delivery, then if someone requests all the data you have on them, then you can honestly say "nothing".

etc. Of course, details matter, and yes, these approaches definitely don't fit all cases, but my feeling is that they would work in half the cases where companies have had my data.


"Selling for cash" - accepting credit cards and wire transfers (paying by check isn't really a thing in most of EU) doesn't necessarily require you to store PII.

How are you going to identify the source of a wire transfer if you don't have a customer to match it against?

Also, if you're selling online then anything service-like is already caught by the EU VAT place-of-supply rules (which require verification of the buyer's location and keeping adequate evidence for up to 7 years) and there have been proposals to extend that to sales of physical goods for some time.

If you're delivering pizza, then you don't need to store every order's phone number and address forever; and if you don't store the delivery data beyond the delivery, then if someone requests all the data you have on them, then you can honestly say "nothing".

And that's exactly what you'll have to tell everyone's credit card company when they start disputing charges as product-not-delivered and you have nothing to counter with.

I'm afraid you're being extremely optimistic about how much personal data processing can be avoided in even these everyday situations. And this is before you do anything like marketing, customer relations, logging use of your electronic systems, having any employees, paying any suppliers, etc. I don't doubt that some cases where companies currently have your data could be avoided, but I suspect 50% is a gross exaggeration.


"How are you going to identify the source of a wire transfer if you don't have a customer to match it against?"

Already in current practice you generally don't identify the source; you identify the order # or invoice # in the transfer details and ignore the payer, which can be (and often is) different from the ordering customer (family members, companies paying some bills, etc). The payer information currently gets used only in case of mistakes and such.

My point is that I'd like all these companies to do their best to treat these purchases as if they were anonymous. But your point about VAT rules is a valid issue that might have wider implications about the general necessity to store data.

"what you'll have to tell everyone's credit card company when they start disputing charges as product-not-delivered and you have nothing to counter with." That's absolutely not an issue. That might be the case for card-not-present transactions, but for as long as I can remember, every single pizza courier or similar has used a wireless terminal to get a chip&pin (or now contactless) card-present authorisation, which can't really be disputed in this way.


My point is that I'd like all these companies to do their best to treat these purchases as if they were anonymous. But your point about VAT rules is a valid issue that might have wider implications about the general necessity to store data.

Yes, you still have VAT records to keep. You also need to prove that you provided all required information to the customer under the consumer protection rules or you can end up having to refund everything going back quite a long time. All these protection rules require accompanying record-keeping as evidence of compliance, which unfortunately is going to undermine your hope to make even simple transactions anonymous in many cases.

that's absolutely not an issue, that might be the case for card-not-present transactions but for as long as I can remember every single pizza courier or similar would use a wireless terminal to get a chip&pin (or now contactless) card-present authorisation which can't really be disputed in this way.

Do you often go into a pizza place and witness a card present transaction to pay for a delivery order? :-)


> Do you often go into a pizza place and witness a card present transaction to pay for a delivery order? :-)

Yes, that's exactly what I'm saying: almost all pizza/goods-delivery/taxi payments are card-present transactions. Whenever I order something like a pizza for delivery, the delivery person arrives at my door with a wireless card terminal and I pay with my card present (chip+pin, or contactless if the amount is small) upon receiving the pizza. It has been this way for so many years now that I don't remember when they switched from the earlier model; businesses such as these usually don't have card-not-present acquiring contracts now, possibly for cost or fraud reasons, as both are worse if card-not-present is permitted.


>you don't need any details about the transcaction beyond the amount and ID. //

That's a lot of trust in the merchant services. "What transaction?", then if all you had was a transaction ID what do you do?

Also, to process refunds you need to have payment details.

In your second case only store the details if people explicitly want you to. You can do repeat customer discounts by sending vouchers for a later order with the present order confirmation. (You can't restrict discounts to those who give up their PII if I'm reading things right.)


What do you mean by ""What transaction?", then if all you had was a transaction ID what do you do?" - that's a common process in online stores. You make a contract with an acquiring bank, with the default scenario being that you won't be processing transactions yourself (as most smallish customers can't or don't want to handle full PCI DSS compliance). Then you (or the bank) contract with one of the merchant gateway providers, and whenever you forward them a customer order session and get back a confirmation token, you verify afterwards (usually the next morning, after close of the business day) with your bank that you've got each transaction. Or something similar; details may vary, but it's an established process that works for thousands and thousands of companies without any significant trust issues. Yes, the gateway has to be trustworthy; in part that's the service they provide, to handle card data in a trustworthy manner so that you don't have to be, because doing things in a trustworthy manner is complicated and expensive (secure facilities, regular audits, four-eyes principle, separation of duties that's infeasible for smallish companies, etc). You also have to trust your acquiring bank and the Visa/Mastercard network; that's also part of the deal.

"Also, to process refunds you need to have payment details." This is false: merchant gateways (in general, not all of them) can also execute refunds (or recurring payments) without you having any sensitive details, using just that same transaction confirmation token they give you when the initial transaction was made. I've written code relating to these processes.
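The token-based flow can be modelled roughly like this. This is a toy stand-in, not any real provider's API (which will differ), but it shows why the merchant never needs the card details, even for refunds:

```python
class MerchantGateway:
    """Toy model of the pattern: the merchant keeps only an opaque
    confirmation token per transaction, and the gateway can refund
    against that token; card data never leaves the gateway."""

    def __init__(self):
        self._transactions = {}
        self._next_id = 0

    def charge(self, card_number, amount_cents):
        # Card data stays inside the gateway; the merchant only ever
        # stores the returned token alongside the order ID and amount.
        self._next_id += 1
        token = f"txn-{self._next_id}"
        self._transactions[token] = {"amount": amount_cents, "refunded": 0}
        return token

    def refund(self, token, amount_cents):
        # Refund by token alone; returns the remaining refundable amount.
        txn = self._transactions[token]
        if txn["refunded"] + amount_cents > txn["amount"]:
            raise ValueError("refund exceeds original charge")
        txn["refunded"] += amount_cents
        return txn["amount"] - txn["refunded"]
```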


Well, in our shop [no longer open, was a micro-business], when someone comes in for a refund but doesn't have the receipt, there's no way to process a refund except to open the safe and get the receipt, because you need to refund to the same card and you don't know the card without keeping some record of it.

We've had transactions that failed to upload but were processed normally on the [mobile] card terminal, and we had to give the merchant services details to complete the processing of the transaction. Sometimes a transaction would fail during processing [cardholder not present (CNP), via phone] but we wouldn't realise until the phone was down, so in some cases we contacted the customer (our business required contact details, it couldn't run without them; this was pre-GDPR anyway). Other times the bank network would be out, so we'd be unable to process transactions without keeping customer data (temporarily).

>as most smallish customers can't or don't want to handle full PCI DSS compliance //

The banks were real bastards for this. Despite providing us mobile card terminals that don't connect to local network they required us to pay for PCI compliance audits of local equipment or pay a penalty amount [you could audit it yourself, took me about 8 hours of reading documentation to establish the protocol as they apparently didn't want us to do it but wanted us to pay instead]. The PCI stuff was basically a hidden-cost scam AFAICT.


> This sounds like one of the risks of doing international business. If you can't follow the laws then don't play.

Sure, but while technically correct, this comment doesn't add anything to the conversation. The question isn't whether or not GP should need to follow laws -- it's "what laws should we be passing, and what are the effects of those laws?"

It's in Europe's best interest to protect their citizens, given that it currently looks like ~40% of the market is more frightened of refusing a GDPR request than they are of leaking PII.

There are lots of ways the EU could address that problem -- harsher penalties, revising how IDs work across member states, releasing more resources for smaller businesses, clarifying more broadly that refusing a GDPR request because of lack of identification is OK. All of these directions have pros and cons.


Frankly, no one has the resources to do this reliably, and the identity theft consequences are potentially enormous.

It's fairly easy to get a reasonably convincing fake ID. High schoolers manage it all the time.


That seems weird; from what I see in Europe, ID forgery is quite rare. Making a convincing fake ID is about as difficult as (or more difficult than) making convincing fake money, as pretty much the same anti-counterfeiting measures are used, and it carries about the same consequences as counterfeiting money. It's possible, but not cheap or widespread: there are a bunch of ways you could get money or goods with a fake ID, but it's very rare in crime statistics. It's more common to see real IDs (of e.g. homeless people) used in such fraud cases, or an accomplice employee making fake contracts without IDs.

Perhaps the issue is the lack of a good centralized ID in the USA, so you have all kinds of different "ID-like" documents, and some of them are insecure?

There's also less motivation for teenagers to get fake IDs. I assume that in the USA the 21-year alcohol limit is a big driver, but here it's not that big of an issue. Plus it's easier and cheaper for a teenager to buy e.g. heroin than a fake ID, and being caught using a counterfeit ID is a crime that risks jail time while buying/possessing small amounts of drugs won't; so using a fake ID is difficult and risky, and it happens not that often.


Fake IDs in EU don't really work, since anyone who really cares (banks, police, ...) will just read the info from the machine readable zone and run that against the relevant database. So EU IDs are more like a web-server session cookie, you can't just make one up.
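Even short of a database lookup, the MRZ itself carries check digits (ICAO 9303, repeating weights 7-3-1) that sloppy fakes often get wrong; a sketch of the algorithm:

```python
def mrz_check_digit(field):
    """ICAO 9303 check digit: weight each character 7, 3, 1 (repeating),
    with digits as their own value, A-Z as 10-35, and the filler '<' as 0;
    the check digit is the weighted sum modulo 10."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch == "<":
            value = 0
        else:
            value = ord(ch) - ord("A") + 10
        total += value * weights[i % 3]
    return total % 10
```

This only proves internal consistency of the document, of course, not that it exists in any register; the database check is what makes EU IDs behave like that "session cookie".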

There are online services who also provide this for a cost: https://www.idcheck.io


Those just analyze photos for photoshop artifacts for a CYA receipt. They don't verify that the ID info is real.

That's next to useless against identity fraud or any attack more sophisticated than MS Paint-level skills. You're severely underplaying how easy it is to fake documentation and the attacks it enables.


I had to verify my identity for an online service a while back. They used a third party company that has a mobile phone app essentially for video conferencing; you then call this company via the app and talk to them. They ask you to show your face and move around and to show your ID, including moving it around so they can check that the security hologram (this was an EU passport) is indeed one. So for this kind of check MS Paint skills would not be enough by far.

I guess they had no way of verifying that the ID info is real, but apparently this process was trustworthy enough for their client.


This sounds like something vulnerable to real-time deepfakes in the very near future.


Maybe, but you can say that about anything that is not in-person face-to-face communication along with physically handing over the passport for inspection. I think the process was pretty rigorous for the current state of the art, and infinitely better than the normal approach of mailing around PDFs into which I have pasted a scan of my signature.


Remember that a GDPR request is going to typically come electronically. You won't have a physical item to examine for all its anti-counterfeiting features - you're probably going to have a photograph from a mobile phone to look at.


To me it sounds a bit dubious that you can both have a valid reason for keeping personal information about someone and not have a valid way of verifying that you are actually communicating with them.

What sort of agreement can you enter with someone if you don't know who they are?


> To me it sounds a bit dubious that you can both have a valid reason for keeping personal information about someone and not have a valid way of verifying that you are actually communicating with them.

Suppose I run a matchmaking site that stores a real name, username, and sexual orientation for registered users.

To avoid over-collecting data, that's all I require. I don't need strong verification at this stage because there's little risk if someone creates a fraudulent account, and collecting strong verification can add other data security risks.

However, if someone demands the data already in the system, it's of a presumably real person. And revealing whether user X is gay can be very sensitive information.


I don't think that what you describe is incompatible with GDPR.

In general GDPR allows and requires you to use 'all reasonable measures to verify the identity', and in your particular scenario requiring the same authentication that you usually use would be considered reasonable, and it's likely the only possible reasonable measure - if what you say is all you store, then it's impossible to distinguish between two different John Smiths making such requests, and https://gdpr-info.eu/recitals/no-64/ recommends that you should not request more info just for the purpose of these requests.

In this scenario where the information was provided by the users themselves (if you had collected them otherwise from third parties, that'd be a very different issue) simply putting a link "download your data here" on your site for authenticated users would be a reasonable solution; and it matches https://gdpr-info.eu/recitals/no-63/ recommendation "Where possible, the controller should be able to provide remote access to a secure system which would provide the data subject with direct access to his or her personal data."
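That "download your data here" link can be as simple as serving the record tied to the already-authenticated session, never a record looked up by whatever name or email the requester typed in. A minimal sketch (invented names, Python):

```python
import json

def export_user_data(session, user_store):
    """Serve a GDPR data export only for the currently authenticated user.

    `session` is assumed to carry the user id established at login;
    `user_store` maps user ids to the personal data held about them.
    """
    user_id = session.get("user_id")
    if user_id is None:
        raise PermissionError("log in before requesting your data")
    # Export exactly the record tied to the authenticated account.
    return json.dumps(user_store[user_id])
```

The point of the design is that the same credential that protects the account protects the export, so the request path adds no new identification problem.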


Yeah, there is some weird subtext to the argument, that for some reason account access shouldn't count as secure verification?

I guess this all hinges on the idea that to implement GDPR all you need to do is set up an email address and handle all requests manually, only to then discover that identity management via plaintext email is actually a bit tricky.


Yeah, there is some weird subtext to the argument, that for some reason account access shouldn't count as secure verification?

It's not in dispute that account access using known good credentials would be reasonable verification. But people forget passwords or mistype email addresses or lose access to email accounts, and they still have legal rights as data subjects under the GDPR.

So, it also matters whether other forms of identification are acceptable. If you have some reasonable means of confirming someone's identity in response to a request under GDPR and you try to avoid using it because the person didn't follow your preferred method based on standard account credentials, it's not clear that regulators would accept that as reasonable. And any time the words "not clear" appear, they come with an implicit threat of severe penalties in GDPR world.


This does seem quite explicit - "If you have some reasonable means of confirming someone's identity in response to a request under GDPR" then you have to use them even if they're not your preferred means. In the scenario proposed above the company wouldn't have to confirm the identity only because they can't.


If someone loses their password and asks for their data under the GDPR it's an open question whether the site can simply say "sorry, there's no longer any way for us to verify you are who you say you are".


Ah, that's actually simple - the answer depends on whether that statement is true or not.

In the abovementioned case where only name (which is not unique) and password and sexual orientation is stored, and literally nothing else, you can say "sorry, we took all reasonable measures to verify your identity, and couldn't" because that's how it is.

However, if google or paypal or someone else who stores much more data says "sorry, there's no longer any way for us to verify you are who you say you are", then that's a lie (easily verifiable by the regulator), and that's not acceptable.


Is it? What paragraph makes it ambiguous whether a company needs to do more than make reasonable efforts to establish whether a request is legitimate?


Why would you need a real name? You shouldn't collect it.


So you say you store sexual orientation connected to a real name, that this information is sensitive, and that you have shoddy account security?

If someone wants to download account information they just have to log in and press the button on the appropriate page, and either get the data directly or else some special token to prove account ownership.

If account access is not secure enough, then what business do you have storing the information at all?


What sort of agreement can you enter with someone if you don't know who they are?

Have you ever walked into a shop, bought some chocolate with cash, and walked out?

There you go, legally binding contract of sale, yet the seller has no idea who the buyer was.


Oh all right, I grant that you can enter into agreements that immediately terminate, without any need to establish identity.

I was however not making an argument by induction, so leaving out the trivial case is not crucial to it.


It might help if you gave a couple of example situations that people could answer for.


Remotely we generally use digital verification, my ID card has a secure chip that's usable for online authentication (available to third parties by redirect through a gov't website) and for e-signatures of digital documents; so I could send a GDPR request in a PDF where the recipient can verify that this indeed was signed by Name Surname ID123. However, things like this are still a bit fragmented between different EU countries, some harmonization would be really helpful for cross-border businesses and it will probably take many years.

What I'm probably saying is that the question "is a GDPR requester providing valid ID?" is solvable. It's not solvable easily (yet), but this Black Hat experiment shows that large companies can already do it internally, and smaller companies can probably outsource it to someone who can do the required integrations/processes to cover all the EU states.


If you can't follow the laws then don't play.

That's an unhelpful argument if there is no reasonable way to determine what the laws are and/or to comply with them.


> If you can't follow the laws then don't play.

This is the subtext of the GDPR. Bluntly, they only want HugeCos providing services on the internet, anyone worth fewer than around 10 digits can suck it.

This isn't limited to Europe, of course. Most governments are coming around to the idea that there are too many printing presses.


Verifying identity is a very hard problem.


Understood, but:

So, when one train operator asked for a photocopy of a passport, he convinced it instead to accept a postmarked envelope addressed to the "victim".

A postmarked envelope? This is baffling to me.


The art of social engineering. The person doing this was a dab-hand at it or else they would not have done what they did.

You can have a plausible excuse for not having your passport - left at parent's house or accidentally in storage.

Driver's license - "I am a cyclist!"

Electricity bill - "My housemate pays all the bills!"

Bank statement - "I only do online banking!"

Electoral roll? - "Don't vote!"

But HR sent me a letter about an update to the company pension plan - will that do?

Yes! - sends through postmarked envelope picture. By that stage the confidence trick has succeeded so 'we know the guy and he is legit'.

Getting this far can get you a mobile phone contract, which counts as well as a utility bill as supplementary proof of ID. That can then be used to get a deposit-only bank account for 'savings'. That can then be just about enough ID to have almost created a person!!!

But with GDPR social engineering you can have it all so much easier.

There was so much panic about GDPR and purging every decent contact from the company newsletter list that no company thought through a thing about how to deal with GDPR information requests. Hubris at its finest.


That should not lead to an "open by default" policy.


Maybe European regulators should have considered that before writing this law.


They did by writing the eIDAS regulation.


From a business's perspective it's a lot worse to get sued for not complying with GDPR than for a side effect of complying with it.


This is a reflection of the fact that we have no good way for someone to digitally prove their identity. Some countries are getting close-ish - Denmark's NemID system, for example, is used by a lot of financial institutions.

However, there remains no easy way to make ad-hoc verifiable statements like 'I am John Smith and I authorise you to send this data to xyz@example.org'.

Governments, please solve this problem! Essentially: combine NemID with Keybase and build a UI that normal citizens can understand.


Italy's "PEC"[1] (Posta Elettronica Certificata, or Certified Electronic Mail) comes pretty close.

There's even an RFC[2] for it.

In order to get one, an individual has to prove their identity via a government-issued ID (ID card or passport). That email address can henceforth be used for all official correspondence as if it were certified/verified mail, with the bonus that both parties "know" the identity of the other (i.e. the company knows it was very likely me who sent it, trusting that the people in charge of verifying my ID did their job), and that the _contents_ of the email are also certified to have been sent from that sender to that destination address and not to have been tampered with (a great thing to have for lawsuit reasons), unlike "standard" registered mail, which only certifies that a letter has been sent and picked up.

[1]: https://en.wikipedia.org/wiki/Certified_email#Italy [2]: https://tools.ietf.org/html/rfc6109


Estonia's ASiC-E containers have solved signing of arbitrary electronic files/documents; Italy is reinventing the wheel a bit. ASiC-E has actually been an official EU standard since 2016, and EU law¹ requires that such digital signatures be accepted in every EU member state. Estonian digital signatures used other standards before that; the first Estonian digital signature was given in 2002.

The ID-card also provides S/MIME should someone want to use it but people usually just use ASICe.

[1] - https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/What...


Denmark has a similar service called eBoks, but it's a proprietary SaaS product rather than an open standard. What's the user experience like with the Italian system? Does every company support it?


> This is a reflection of the fact that we have no good way for someone to digitally prove their identity.

Centralized identity isn't a solution; it's the problem. Once you implement something like that, it requires everyone to track everything using it in order for it to be used to authenticate access to information. Which means everybody has to store the ID number as a field in every database, and it becomes a de facto primary key that allows all information to be correlated by every blackhat that compromises more than one data set.

Meanwhile there will be the "just make it work" people who are bad at security, who will do whatever is necessary to compromise their own security because the attacker told them to. And then we would be giving attackers the capacity to take over their entire lives instead of only one relationship with one entity.

Moreover, the scope of the damage if someone were to compromise the central identity system in general rather than only for a specific person is horrifying. It would become a single point of compromise for the whole country. And the worst kind on top of that, because everything would hook into it which would cause it to become ossified and difficult to update. If the system was then publicly compromised, how long does it take for everyone everywhere to update every piece of code to use the replacement? Which thing do you do in the meantime, continue using the compromised system as all hell breaks loose, or shut down your entire country?

There is a better solution. If you have an account with someone, you make requests by authenticating the same way you do with your account. And if you don't have an account, you should be able to request deletion of the data associated with e.g. your IP address, but not request to download it, because there is no way to verify your identity for that. Even with centralized ID, an IP address can be used by multiple people who shouldn't be able to give consent for one another and may not be mutually distinguishable by the party receiving the request; the same goes for most other global data (e.g. many people share full names with other people). The only way to make centralized ID work in that context is to tag everything with it to begin with, compromising all anonymity and pseudonymity, which can't possibly be the right trade-off for what is supposed to be privacy legislation.
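The account-or-IP policy above can be sketched in a few lines (a toy illustration; the function name, record shape, and fields are all made up):

```python
# Toy sketch of the GDPR-request policy described above (all names hypothetical):
# authenticated account holders may download or delete their own data;
# anonymous requesters may only delete data keyed to their IP address,
# never download it, since their identity cannot be verified.

def handle_gdpr_request(action, session_user, requester_ip, records):
    """Apply a request to `records` (a list of {"user": ..., "ip": ...} dicts).

    Returns the affected records, or raises if the request is not allowed.
    """
    if session_user is not None:
        # Authenticated: same trust level as ordinary account access.
        mine = [r for r in records if r["user"] == session_user]
        if action == "download":
            return mine
        if action == "delete":
            for r in mine:
                records.remove(r)
            return mine
    else:
        if action == "delete":
            # Deletion keyed to an IP is low-risk even if the IP is shared.
            matched = [r for r in records if r.get("ip") == requester_ip]
            for r in matched:
                records.remove(r)
            return matched
        if action == "download":
            raise PermissionError("cannot verify identity for download by IP alone")
    raise ValueError("unknown action: " + action)
```

The asymmetry is the point: a wrongly honoured deletion costs the requester nothing, while a wrongly honoured download is exactly the attack in the article.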


> Governments, please solve this problem!

Fun fact: the EU has passed a law (the eIDAS regulation) requiring member states to implement PKIs so that people can digitally sign documents, with those signatures legally equivalent to handwritten ones.

https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/What...


Latvia's ID cards support e-signing of digital documents, so I could have a pdf "I authorise you to send this data to xyz@example.org" and sign it so that the recipient can securely verify that this was signed by Name Surname ID123.

Estonia has it quite similar, I'm not certain if it's technically the same standard or something slightly different.


By EU law it's basically the same.


The Netherlands uses DigID, which is effectively a federated identity provider. Problem is that it was originally intended for government use only (recently it's been expanded to include health insurance providers), and it's not accessible for commercial entities.

It was also marred by very bureaucratic policies. For example, to get information on account usage (e.g. how many times was my account used, from which IP, to access which site) you needed to file a police report first. And while it supported 2FA very early (through SMS, now via a mobile app as well), the end user could not forcibly enable 2FA; only the target service could decide whether it wanted two-factor authentication on its site. Luckily, those issues have been fixed.

Would be nice if they supported open standards like OAuth though.


I gather the Dutch company behind DigID, digidentity, is also one of the companies behind UK's "Verify" (used on gov.uk).

Barclays, Post Office Ltd, and Experian are the other options -- if memory serves these 3 all had major security breaches.

I recently had to apply for a criminal records check ("DBS") and the government's DBS (Disclosure and Barring Service) required me to give up all my ID to one of those companies "to identify me" before I could apply; in case someone who was not me was applying for the information.

Aside, it seems they could have allowed anyone to apply (and pay the £25 fee) but only sent the response to a known address, which they could cross check from my tax record and driving license, and ... which details have to be kept current by law.


> Governments, please solve this problem!

I would prefer governments to solve it with competent software engineers in the mix, and maybe other professionals from the finance and IT-security industries, but never via one single large entity.


I didn't read that as wanting legislative representatives and the like to solve it themselves, so much as making it a matter of focus and enlisting the kinds of people you're recommending to provide a solution like this.

Personally, I really feel we need a public/private-key kind of system for certain things like Social Security Numbers (SSNs) in the US. If a leak includes an SSN (essentially a public key), it could then be regenerated, with some apparatus to push the new one out to the parties it was shared with, so the consumer/user doesn't need to redistribute it manually (much like using your Google, Facebook, or other OpenID as credentials for another site, where the account shows where it is being used). If exposed public keys could be detected automatically and the reset-and-distribution process automated, even better (users should still be able to trigger it manually if they feel a key is compromised).
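A toy sketch of that rotate-and-redistribute apparatus (everything here is hypothetical; a real system would need authentication, auditing, and signed notifications to the subscribed parties):

```python
import secrets

class TokenRegistry:
    """Toy registry where the 'SSN' is a rotatable public token.

    The stable private record never leaves the registry; only the public
    token is shared, and it can be regenerated after a leak, with every
    party that was given the old token collected for notification.
    """

    def __init__(self):
        self._people = {}       # private_id -> {"token": ..., "parties": set()}
        self._token_to_id = {}  # current public token -> private_id

    def enroll(self):
        private_id = secrets.token_hex(16)
        token = secrets.token_hex(8)
        self._people[private_id] = {"token": token, "parties": set()}
        self._token_to_id[token] = private_id
        return private_id, token

    def share_with(self, private_id, party):
        # Like linking an OpenID login: record who currently holds the token.
        self._people[private_id]["parties"].add(party)

    def rotate(self, private_id):
        # Invalidate the leaked token and return the parties to notify.
        person = self._people[private_id]
        del self._token_to_id[person["token"]]
        new_token = secrets.token_hex(8)
        person["token"] = new_token
        self._token_to_id[new_token] = private_id
        return new_token, sorted(person["parties"])

    def resolve(self, token):
        # Old tokens stop resolving as soon as they are rotated.
        return self._token_to_id.get(token)
```

The key property is that a leaked token becomes worthless the moment it is rotated, unlike a real SSN today.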


An SSN should be just a unique number per person, nothing more, nothing less: never used as proof of identity, only for having a unique number per person. It's impossible to ensure such a number doesn't leak, so there must also be zero consequences when it does. That would require major legislative change.


Of course. There are plenty of different ways to build such a platform (or contract it out) - I'm not advocating any in particular.

The UK has a rather interesting example in the GOV.UK Verify service, which federates the identity verification and authentication out to third parties. The user can then choose which provider (e.g. Barclays, Post Office) they want to prove their identity through. Experian is also one of the providers, which perhaps illustrates some of the flaws with this design...


In this case, the solution is easy. The user most likely already has an account, so just ask for the account password. If the user claims they lost the password, then do a classic password recovery via email.

Of course it's tricky for organizations storing data about users without an account (e.g. Facebook or Google; I'm not sure how they could handle that at all, even with government IDs).


The only way to do what most people want here is to bake this digital identity into humans, which is beyond our present technology and also feels rather like a recipe for authoritarianism. For one thing, if people can lose it, they will. They'll destroy them on purpose, they'll be stolen by crooks, they'll be forgotten at airports or in hotel safes. That's not a problem for baked-in things; people don't leave their hearts behind (outside of country songs) or (outside of maybe China) get them stolen by crooks, but any conceivable device, card, key, or document will have this problem.

Greg Egan's "Orphanogenesis" (http://www.gregegan.net/DIASPORA/01/Orphanogenesis.html a short story setting up the protagonist for his novel "Diaspora") describes making Polis Citizens (people who exist only as software; the other branches of humanity have given themselves bodies suitable for long-term existence in space, the Gleisner Robots, or given up on consciousness altogether, the Dream Apes) with a cypherclerk, a component that does public-key crypto. Once the system is confident that the process of making a new citizen has succeeded and produced a conscious person, the cypherclerk is initialised and the new citizen has a unique and impossible-to-fake proof of ID. This plays no major role in the story; it's just there because presumably Greg agrees with you that it'd sure be convenient if there actually was digital ID. But there isn't.

Here's a central conflict: I would like to be able to prove that I'm who I say I am, but without being stuck with that identity. This makes the identity disclaimable. You will find plenty of people who feel the same way, and some of them have very concrete practical reasons (e.g. people with stalkers or who ratted on a crime boss). But for a bunch of things people, and especially governments want to do with a digital ID that's no good.

A disclaimable ID can work for a driving license. Barry Shitpeas is licensed to drive an HGV, you can either prove you're Barry Shitpeas, or you can get a new license to drive the HGV with the identity you do want to use.

But if Barry drink drives, taking away his license, knowing he can just get a new one as Jerry Poocabbage tomorrow, well that's a rubbish outcome, isn't it? We want a way to _stop_ Barry from driving even if he changes identity. And we can't do that with disclaimable ID.


In my opinion, we need an open protocol for this. Every government having a separate digital identification tool is a poor solution.

The simplest solution I can imagine is to mimic the solution used to digitally identify companies via HTTPS (certificate authorities), but modified with the intent of identifying a person rather than a company.


Hasn't Estonia solved this with their national ID smart card?


I have never used it, but I do get the impression that Estonia is pretty far ahead of everyone else. I don't know if this particular use case is natively supported (but would be interested to know). I suppose it would be possible for a third party to build it.


Latvia also has ID cards with embedded certificates that can be used to sign documents and log in. These cards are optional for now (a passport is mandatory; an ID card can be requested) but will be required from 2023. Estonia/Latvia/Lithuania also have SmartID (although this is a private company's service), which is an app on your mobile device and is used for logging in to banks. A SmartID account is issued by the bank (lowest level, can only log in to the issuing bank) or by yourself using your ID card (highest level, can log in to any service that supports SmartID, as well as sign documents).


Yes, it's officially supported by providing everyone the possibility to sign documents with it. It was made an EU-wide standard in 2016 https://www.etsi.org/deliver/etsi_en/319100_319199/31916201/... but previous iterations have existed since 2002.

The person you replied to is pretty much right: the Estonian ID card has solved 99.99% of authentication and signing problems for its citizens, and its support is mandated by law and very widespread. There are a few flaws, but those are minor compared to the (to put it softly) clusterfuck the rest of the world is dealing with.


Flaws like this? https://arstechnica.com/information-technology/2017/10/crypt...

I support such uses of smartcards, but we have to be disciplined regarding our assumptions about non-repudiation.


I'd rather not start compiling a list of username/password database thefts, credential-stuffing attacks, identity thefts, forged paper signatures, time lost to inefficient paper procedures, secrets stolen due to how hard it is to encrypt things, etc. etc. etc.

Of course we have to be disciplined, but other things can't even remotely reach the security such a solution provides. Your comment has very FUD-y undertones, raising concern about a very minor thing if you actually look at how much it solves and how much better it is compared to other widespread applications.


I'm extremely enthusiastic about smartcards, even trying to build a startup around making it easier to deploy and build services around smartcard-based authentication and key management. I agree that in terms of overall security they're incomparable to the existing mess. But, fair point--I flubbed attempting to articulate a tangentially related concern.


Yes. A lot of Estonians use it daily to log in to their bank accounts, give digital signatures, deal with government business, check their health data etc. All you need is your ID card and your PIN codes.


Sweden's BankID is quite good too.


BankID is horrible.

It's coupled to your phone's OS, so as it becomes even more mandatory you're stuck carrying around an iOS or Android device, even if your primary phone is something like a Librem 5.

It also doesn't support any way to delegate access, either to people ("my partner should have access to this bank account") or computers ("I want to back up my incoming govt. messages automatically").


Not to mention that all providers of Mobile Bank ID (which, as you mentioned, is by far the most widely supported one) are private companies, mostly banks, which have no duty to take you on as a customer.

If you for whatever reason can't or won't get an account with a Swedish bank, you're effectively cut out from large parts of online services.

I really think the government needs to realize they should provide ID issuance digitally to ensure that no one is left out.


They already did realise: https://www.regeringen.se/pressmeddelanden/2019/03/utredning...

The existing Skatteverket ID card contains an e-ID (from a private provider) as well, but alas only Skatteverket supports it, for some reason.


> BankID is horrible.

The rest of your post does not back up this claim IMO.

> It's coupled to your phone's OS, so as it becomes even more mandatory you're stuck carrying around an iOS or Android device, even if your primary phone is something like a Librem 5.

At least here in Norway you can get a standalone hardware 2-factor key.

> It also doesn't support anyway to delegate access, either to people ("my partner should have access to this bank account") or computers ("I want to back up my incoming govt. messages automatically").

I will happily admit banks aren't too good at this, but it isn't BankID's fault.

BankID only handles authentication. Once I'm logged in I can easily delegate access to my account using my banks self service feature. If your bank doesn't let you it is their fault.

As for why it cannot be used by a machine, I guess the reason is that use of BankID is considered the same as a signature, so signing with someone else's BankID is considered forgery.


> At least here in Norway you can get a standalone hardware 2-factor key.

You can get the key embedded on a smartcard, but it's still coupled to their proprietary driver (which only works on Windows or macOS, of course).

It's also a separate API and not as widely supported as Mobile BankID.


> You can get the key embedded on a smartcard, but it's still coupled to their proprietary driver (which only works on Windows or macOS, of course).

Maybe it is different where you live, but the standalone hardware key I mention is standalone: you open the website, enter your national ID, find your token generator, enter the PIN code for the hardware token, read your token, type it into the bank's web site, and enter your password in a different field.

None of this is linked to Windows or Mac, only to a reasonably modern browser.

And the thing you describe seems to be very very different and I'm confident there's only one BankID product in the Nordic countries, so either it is implemented in a very different way with your bank or we aren't talking about the same thing.

> It's also a separate API and not as widely supported as Mobile BankID.

Around here hardware tokens were supported before mobile BankID was even a thing. They are still available everywhere I log in.

---

As for why I care, I find that BankID is a good idea, reasonably implemented, so I don't think it is OK to trash it - unless I'm misunderstanding something, in which case I want to learn.


> Maybe it is different where you live but the standalone hardware key I mention is standalone: You open the website, enter your national id, find your token generator, enter pin code for hardware token, read your token, type it into the bank web site, and enter your password ib a different field.

That sounds completely different. "BankID På Kort" is basically the same experience as using an OpenPGP smartcard: it prompts you to insert the card, enter your PIN, and everything else is handled in the background.

Many banks here (and at least Nordea used the CC for this) also support manual challenge/response auth like you describe, but this is unrelated to BankID and seems to generally be considered deprecated.

> And the thing you describe seems to be very very different and I'm confident there's only one BankID product in the Nordic countries, so either it is implemented in a very different way with your bank or we aren't talking about the same thing.

From what I can tell, Swedish and Norwegian BankID are completely separate. NorBankID seems to be operated by Vipps[0] (a consortium of norwegian banks) and have existed since 2004, while SweBankID is owned by Finansiell ID-Teknik[1] (a consortium of swedish banks) since 2002.

They also don't share the logo, or seem to have any ties between their websites.

> Around here hardware tokens were supported before mobile BankID was even a thing. They are still available everywhere I log in.

Each bank generally had their own hardware tokens since before BankID (and still do).

Government services usually also support Telia's NetID (which is similar to BankID På Kort, but at least seems to provide a Linux driver).

However, BankID is also starting to become popular for services that would otherwise have been fine with plain old username/password authentication, rather than implementing U2F or TOTP. These services usually don't put a lot of thought into their implementation, and don't tend to implement alternative auth methods. Older services will support username/password for existing users, but expect it to be considered deprecated. Examples of this category would be Hallon (mobile network), Hemfrid (home cleaning service), or Kivra (crappy email without the federation).

> As for why I care, I find that BankID is a good idea, reasonably implemented, so I don't think it is OK to trash it

Don't let decent be the enemy of good.

[0]: https://www.bankid.no/privat/om-oss/

[1]: https://www.bankid.com/en/om-oss/about-finansiell-id-teknik


> That sounds completely different. "BankID På Kort" is basically the same experience as using an OpenPGP smartcard: it prompts you to insert the card, enter your PIN, and everything else is handled in the background.

Agreed, sounds completely different. We had some smartcard id system here as well, but last time I saw any of that in practical use for banking was 10 or so years ago, and even then it was standalone (battery driven, small enough to fit in my pocket) and not dependent on a PC with proprietary OS.

> Many banks here (and at least Nordea used the CC for this) also support manual challenge/response auth like you describe, but this is unrelated to BankID and seems to generally be considered deprecated.

I see. Around here this is official from BankID, and it seems you need a "proper" physical BankID to even log into your bank to issue a mobile BankID[0].

> From what I can tell, Swedish and Norwegian BankID are completely separate. NorBankID seems to be operated by Vipps [...] and have existed since 2004, while SweBankID is owned by Finansiell ID-Teknik[...]since 2002.

The Vipps situation might be a bit confusing: Vipps wasn't created until a few years ago, but it is more or less universally loved by everyone, so it seems they have used that name to cover everything.

>They also don't share the logo, or seem to have any ties between their websites.

You are right. Good point. I'm not sure anymore that they are related.

> Government services usually also support Telia's NetID (which is similar to BankID På Kort, but at least seems to provide a Linux driver).

Around here the government has its own (MinID), but you can also use Buypass, Commfides, or Norwegian BankID. I never see anyone using anything except MinID or BankID, though.

> However, BankID is also starting to become popular for services that would otherwise have been fine with plain old username/password authentication, rather than implementing U2F or TOTP. These services usually don't put a lot of thought into their implementation, and don't tend to implement alternative auth methods. Older services will support username/password for existing users, but expect it to be considered deprecated. Examples of this category would be Hallon (mobile network), Hemfrid (home cleaning service), or Kivra (crappy email without the federation).

With a broken system that by design isn't cross platform like you describe this seems like a bad idea, yes.

>Don't let decent be the enemy of good.

Agree. That said I find the Norwegian implementation is good now after they got rid of the applets a few years ago, and even back then I was able to somehow get it to work - and it was about as broken on Windows as well ;-)

Today it Just Works across all devices I use, for work as well as at home: Linux laptops, Windows laptops, iPad, Android phone, and probably whatever else, as long as it has any modern browser (I use FF mostly, but it seems to work in everything from IE and Edge to Opera and Chrome).

To summarize, I guess the Norwegian BankID is good and the Swedish one is bad?

[0]: https://www.bankid.no/privat/kom-i-gang/


Ugh, my Austrian bank is currently trying to force me into using a system like this. The "standard" way is via an Android or iOS app, the "alternative" is via a smartcard reader thing that seems to work with Windows only.

They claim that this is mandatory due to some EU regulation, but they conveniently forget to say what regulation that is supposed to be.


It's the Payment Services Directive (PSD2). Username+password has been obsolete and insecure for at least 20 years now.


> It's the Payment Services Directive (PSD2). Username+password has been obsolete and insecure for at least 20 years now.

That does not imply that banks must implement 2FA with their proprietary applications.

Banks could just implement TOTP (Time-based One-Time Passwords, RFC 6238) or HOTP (HMAC-based One-Time Passwords, RFC 4226) and let me choose how I generate my OTP: for example, with a hardware OTP generator or an open-source application.

Most banks are using PSD2 as an occasion to force their privacy-invading apps on their users.
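For reference, both RFCs are small enough to implement with nothing but the standard library; this sketch matches the published RFC 4226/6238 test vectors (a real deployment would also need secret provisioning, rate limiting, and a clock-drift window):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HMAC-based One-Time Password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation" from the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based One-Time Password (RFC 6238): HOTP over a time counter."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Any off-the-shelf hardware token or open-source authenticator app implements exactly this, which is the point above: the bank only needs to store a shared secret and compare codes, no proprietary app required.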


Absolutely not; I heavily dislike SmartID and similar proprietary spyware as well. A TOTP hardware token would, in my opinion, be more secure. The reason banks use apps, though, is convenience; having some identity tied to the app is just a bonus for them.


You can install it on your PC or Mac instead if you want, and it is not the only e-ID in Sweden (alas the others aren't so widely adopted, but that should change if/when the new government ID happens).

Access delegation not being part of BankID itself is a feature not a deficiency, it would undermine the concept of secure digital ID if someone else could digitally impersonate you with it. Instead, services can choose whether they let you allow someone to log in for you.


> You can install it on your PC or Mac if you want.

1. Same problem. There is still no Linux client, for example.

2. It's a different API. Most services specifically require Mobile BankID these days. Desktop (regular) BankID won't work there.

3. Many applications are only useful on the go, such as Swish.

> Access delegation not being part of BankID itself is a feature not a deficiency, it would undermine the concept of secure digital ID if someone else could digitally impersonate you with it.

Different kinds of services have different security requirements. If multiple people live together then it makes sense that any of them should be able to send requests to their cleaning service.

There is also already a clear precedent to allow delegation of access that require strong authentication IRL. For example, PostNord allows you to retrieve someone else's mail as long as you provide ID for both yourself and the recipient. You can issue a Fullmakt which authorizes someone to take legally binding actions on your behalf. Hell, you can even vote by delegate.[0]

Why should those rights not extend into the digital world?

> Instead, services can choose whether they let you allow someone to log in for you.

And they don't, because they are lazy and think BankID has magically solved all of their authentication woes.

[0]: https://www.val.se/servicelankar/teckensprak/satt-att-rosta....


> It's a different API. Most services specifically require Mobile BankID these days. Desktop (regular) BankID won't work there.

I'm not sure which point you're referring to with the mention of differing API. Mobile BankID is the same API as regular BankID (but services can choose which they accept); as for other Swedish e-ID services, there are aggregator services.

Re: most, really? There are some that don't accept non-mobile BankID, but it is uncommon in my experience.

> There is also already a clear precedent to allow delegation of access that require strong authentication IRL.

Sure, but what does that have to do with BankID itself? Both in-person and online, it is the service that decides whether someone else can act for you. And various services do offer this: I can let others access my tax records or prescription medicine records online.

I don't think it's unreasonable for services to want to know who they're dealing with. In real life, if someone else shows my ID at postnord, they have to show their own ID too.


> but services can choose which they accept

And therein lies the problem.

> Re: most, really? There are some that don't accept non-mobile BankID, but it is uncommon in my experience.

You already refuted this yourself in https://news.ycombinator.com/item?id=20656033.

> I don't think it's unreasonable for services to want to know who they're dealing with. In real life, if someone else shows my ID at postnord, they have to show their own ID too.

BankID could have designed their API such that "person performing the action" and "subject the action applies to" are different fields.

But even that is unnecessary. Who actually requested the action concerns only the subject, not the third party carrying it out. BankID could simply log "Delegate A requested entity B to perform action C on behalf of subject D" and show it to D, while keeping B completely in the dark.


It would likely be very problematic for the security model, as it is crucial that you only sign your own requests and understand what they do. I guess you can technically sign someone else's request made with your identification number today, as most services don't use QR codes for presence verification. In general I am also not sure that further expanding, and blurring the line of, what you can do with BankID is a good idea. If anything, I would like more limits.


> There is also already a clear precedent to allow delegation of access that require strong authentication IRL. For example, PostNord allows you to retrieve someone else's mail as long as you provide ID for both yourself and the recipient.

They have the same service in their app with BankID + QR code (at least for packages).


My point is that BankID should have something similar for any BankID action.


It wouldn't work, because services using BankID want a (presumably contractually obligated) assurance that only that particular person is using the service. If someone else can be authorised to use your ID, it undermines that.

They could still add such a feature of course, but they would need to inform and have the co-operation of services when someone else is using the ID, so it wouldn't be widely supported.


It would be awesome if they could do it like this:

Person A initiates and signs a request to delegate to Person B. Person B receives the delegation request and signs it, producing a positive response. The response (containing Person B's signature) is then 'wrapped' in Person A's signed request, and only then sent on to the destination.
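As a toy sketch of that wrapping (HMAC stands in for the real asymmetric signatures BankID would use, and all names and fields here are invented):

```python
import hashlib
import hmac
import json

def sign(key, payload):
    # HMAC as a stand-in for a real digital signature; a real scheme
    # would use asymmetric keys so the destination can verify both parties.
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def delegate(a_key, b_key):
    # Person A initiates and signs a request naming B as the delegate...
    request = {"principal": "A", "delegate": "B", "action": "fetch parcel"}
    request_sig = sign(a_key, request)
    # ...Person B countersigns, producing the positive response...
    acceptance = {"request_sig": request_sig, "accepted_by": "B"}
    acceptance_sig = sign(b_key, acceptance)
    # ...and the response is wrapped together with A's signed request.
    return {"request": request, "request_sig": request_sig,
            "acceptance": acceptance, "acceptance_sig": acceptance_sig}

def verify(envelope, a_key, b_key):
    ok_a = hmac.compare_digest(sign(a_key, envelope["request"]),
                               envelope["request_sig"])
    ok_b = hmac.compare_digest(sign(b_key, envelope["acceptance"]),
                               envelope["acceptance_sig"])
    # B's acceptance must refer to exactly this request, not another one.
    bound = envelope["acceptance"]["request_sig"] == envelope["request_sig"]
    return ok_a and ok_b and bound
```

The destination checks both signatures plus the binding between them, so neither party can repurpose the other's signature for a different request.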


>Same problem. There is still no Linux client, for example.

There was[0]. Maybe you can revive it, since it is a pain-point?

>It's a different API.

Having read the BankID specs, they're "different" APIs only in how the session is initiated. You're still challenged to enter the PIN for the certificate, which prompts the response to the authentication request. In other words, BankID and Mobile BankID present the exact same set of data back to the session initiator.

>Most services specifically require Mobile BankID these days. Desktop (regular) BankID won't work there.

I have yet to run into anything that would not take BankID or Mobile BankID.

[0] - https://fribid.se/


> There was[0]. Maybe you can revive it, since it is a pain-point?

No. This is a problem that was intentionally caused by Finansiell ID-Teknik, and I'm not going to get into a cat-and-mouse game to fix their for-profit product.

> Having read the BankId specs, they're "different" APIs in only in how the session is initiated. You're still challenged to enter the PIN for the certificate, which prompts the response to the authentication request. In other words, BankId and Mobile Bank Id are presenting the same exact set of data back to the session initiator.

They're different in that a service that takes Mobile BankID does not automatically support the desktop version, or vice versa. Whether the API is similar after that doesn't matter.

> I have, as of yet, to run into anything that would not take BankId or Mobile Bank Id.

Off the top of my head, Swish and Hemfrid only accept Mobile BankID, with no alternate authentication options. Swedbank accepts Mobile BankID or their custom OTP hardware token, but not desktop BankID.


>No. This is a problem that was intentionally caused by Finansiell ID-teknik, and I'm not going to get into cat-and-mouse game to fix their for-profit product.

You're - literally - on "Hacker News" complaining about the lack of a product. You were given one that you could easily fix to suit the needs/demands you mentioned (the lack of which was a principal complaint against BankID), but instead you want BankID to write the Linux app for you because... profits? This makes no sense, especially when you would already have a hefty baseline of code to work with.

>They're different in that a service that takes Mobile BankID does not automatically support the desktop version, or vice versa.

"Does not automatically" doesn't implicitly mean that it's disallowed. You seem to be confusing the two concepts here.

>Whether the API is similar after that doesn't matter.

We're - literally - talking about the API being the same because you made it a point that it was different. Either it matters or it doesn't. Pick one.

>Off the top of my head, Swish and Hemfrid only accept Mobile BankID, with no alternate authentication options. Swedbank accepts Mobile BankID or their custom OTP hardware token, but not desktop BankID.

O.k.? We're still in the swamplands of "those are problems created by the app developers, not BankID", correct? I'm not sure I'm following how the app designers' decisions are the fault of BankID...


> You're - literally - on "Hacker News" complaining about the lack of a product. You were given one that you could easily fix to suit your aforementioned needs/demands and was a principal complaint against BankId but, instead, you want BankId to write the Linux app for you because... ...profits? This makes no sense; especially, when you would already have a hefty baseline of code to work with.

That's like saying jailbreaking makes iOS respect the consumer. It's a temporary hack that makes it tolerable to use... for a few days until the next mandatory update comes out and breaks everything again.

You won't get lasting improvement without changing the mindset of the powers that be.

> Does not automatically doesn't - implicitly - mean that it's disallowed. You seem to be confusing the two concepts, here.

You're saying service developers can put in the work to support both desktop and mobile BankID. I'm saying most of them are lazy and don't bother. Those two statements are not incompatible.

> We're - literally - talking about the API being the same because you made it a point that it was different. Either it matters or it doesn't. Pick one.

Similar and compatible are very different things.

Then again, this whole subdiscussion is irrelevant anyway, because desktop BankID is still crap for the same reasons as Mobile BankID.

> O.k.? We're still in the swamplands of those are problems created by the app developers and not BankId, correct..? I'm not sure I'm following how the app designers' decisions are the fault of BankId...

If BankID had spent more than two seconds designing their API boundaries then the app developers wouldn't have had to care about supporting both in the first place.


>You won't get lasting improvement without changing the mindset of the powers that be.

I'm confused: Do you believe that whining about it on HN will accomplish that?

>You're saying service developers can put in the work to support both desktop and mobile BankID. I'm saying most of them are lazy and don't bother. Those two statements are not incompatible.

I am, am I? I thought I was saying that you're being assumptive because that's actually what I was saying: You're equating a potential to an emphatic and it doesn't work that way.

>Similar and compatible are very different things.

Now you're just being obtuse and ignoring the entire premise that they're the same - much less have you decidedly chosen whether it matters or not.

>Then again, this whole subdiscussion is irrelevant anyway...

Then why are you replying to it? I think she doth protest too much!

>If BankID had spent more than two seconds designing their API boundaries then the app developers wouldn't have had to care about supporting both in the first place.

According to you, they did design their API boundaries because they're "different APIs". As a byproduct, they must've spent more than two seconds designing it to accomplish that, yeah?

The mental gymnastics you're performing must be tiring. I feel for you, I really do.

I do have to concede that you are correct in one point: The whole sub-discussion is irrelevant, namely because your opinion is obviously in the minority and you can't maintain a static viewpoint on anything so far - except to say "bankid sucks"; which isn't substantive.

If you ever change your mind about fixing your problem, I believe this might help in that endeavour:

https://www.bankid.com/assets/bankid/rp/bankid-relying-party...


Nordea requires Mobile BankID to log into their online banking, but they also let you log in with a card reader and do the challenge/response codes thing. I assume desktops/laptops aren't secure enough for their taste.


Fair enough. =]

Handelsbanken allows both. I think the mobile app is pretty static, however, after you sign in with Mobile Bank Id. I still wouldn't call that a "disallow", more of an intentional UI convenience for users that want #easyMode.


Fun-filled fact: It's also in Norway[0] now.

[0] - https://www.bankid.no/bedrift/


Shameless plug: https://www.yoti.com/

We're trying to do exactly that.


> Mr Pavur says he believes he did not break the law himself while conducting the trial

This is a bit odd. I know his partner consented to this, but this doesn't seem like it should be enough to make this not identity fraud.

Obviously the research Pavur carried out is extremely valuable and the mid-sized companies failing to follow proper procedure are the real problem here, but it still seems like it would be technically illegal.


Regardless, it would then be a bad law, since almost all criminal law looks at intent and the reasonable expectations of how a citizen should act.

We don’t need to keep replaying the vilification of security researcher game just because it involves the flawed gov systems imposed on technology itself instead of just technology. The end goal is the same, the privacy and security of end users.


Agreed. I hope any jury would nullify such a law. AKA "perverse verdict" in the UK?


I’d never trust a jury with any highly technical matter such as this. Prosecutors have shown they can be highly effective at spinning even the most basic security research into sounding like serious criminal behaviour (like punishing someone with serious jail time for incrementing the id number in a URL and finding other users' personal profiles completely unprotected).

You only gamble on juries for stuff like murder trials and similar basic crimes.


Juries can't nullify laws. They can nullify verdicts.


Good point.

>A jury verdict that is contrary to the letter of the law pertains only to the particular case before it. However, if a pattern of acquittals develops in response to repeated attempts to prosecute a particular offence, this can have the de facto effect of invalidating the law.

-Wikipedia


> I know his partner consented to this, but this doesn't seem like it should be enough to make this not identity fraud.

Most crimes require an intent to commit the crime, with notable exceptions (possession).

His intent was not fraud, it was security research. As demonstrated by getting the permission of the potential victim, and carefully avoiding things like forgery.


Fraud (in the UK at least) requires an intention to gain for yourself, so he would probably be alright.


A lot of the risk of this kind of thing could be greatly reduced if the law were changed so that, for data you should already have about yourself, the company only has to tell you whether they have that data.

For example, I know my birth date. A company that also has my birth date should be able to just tell me that they have my birth date. They should not have to tell me the actual date.

Most of the data mentioned in the article is like this: credit card information, login and password information, social security number, stays in hotels, train journeys, high school grades, and maiden name.

Companies like credit reporting agencies that keep such data and share it with others would need to be an exception, so that you could check that they aren't giving out incorrect information about you.
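A minimal sketch of that "confirm, don't disclose" idea - purely illustrative; the field names and record layout are my own assumptions, not anything from the article:

```python
# For fields the requester should already know about themselves,
# acknowledge only WHICH fields are held, never echo the values back.

# Fields the subject already knows; safe to confirm, risky to disclose.
SELF_KNOWN_FIELDS = {"birth_date", "ssn", "maiden_name", "card_number"}

def access_request_response(record: dict) -> dict:
    """Split a stored record into values to disclose vs. fields to
    merely acknowledge as held."""
    disclosed = {}
    acknowledged = []
    for field, value in record.items():
        if field in SELF_KNOWN_FIELDS:
            acknowledged.append(field)   # "we hold this", no value
        else:
            disclosed[field] = value     # e.g. derived/inferred data
    return {"held_but_not_shown": sorted(acknowledged),
            "disclosed": disclosed}

record = {"birth_date": "1980-01-02", "ssn": "123-45-6789",
          "ad_segment": "frequent_traveller"}
print(access_request_response(record))
```

An attacker who tricks such an endpoint learns only that a birth date is on file, not what it is.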


"Generally if it was an extremely large company - especially tech ones - they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

This sums up regulatory compliance across the world quite well.


I think this is part of the reason why GDPR needs an exemption for businesses that are too small. This kind of a flaw will be abused more and more and they'll never be able to close this gap with small and medium businesses.


1. Such an exception would be used by large companies to evade the law.

2. Small and medium companies are also committing privacy abuses.


1. You can make large companies unable to evade the law.

2. Yeah? But according to this article GDPR actually makes small and medium companies even more dangerous to trust with your information.


I can’t believe a driver's license scan is all that is needed for many companies. That means that losing my wallet on the street effectively means that someone can go get my entire digital history.

Why not require the user to request this data while signed in to the service?
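That gating is trivial to express. A hypothetical sketch (the session store and return values are made up for illustration) of accepting a data request only from an authenticated session, and routing everything else to manual review instead of trusting an emailed ID scan:

```python
# Only honour a data request from an already-authenticated session;
# account-less requests go to manual identity vetting.

ACTIVE_SESSIONS = {"sess-abc123": "user-42"}  # session token -> user id

def handle_data_request(session_token: str, claimed_user: str) -> str:
    user = ACTIVE_SESSIONS.get(session_token)
    if user is None:
        # No login to lean on: queue for manual verification
        # rather than accepting a passport scan over email.
        return "queued_for_manual_verification"
    if user != claimed_user:
        return "rejected"  # logged in, but asking for someone else's data
    return "export_started"

print(handle_data_request("sess-abc123", "user-42"))  # export_started
```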


This is a great thing that the hacker noticed. It sounds like we really need more OpenID style auth and a lot more limited-lease access to personal data, and more tools to store our own data.


> "But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

I wonder if there is a market for selling GDPR compliance / advisory services. Some company that makes sure you're doing everything right, and inspects the requests for validity.


There are quite a few firms offering compliance / advisory services, including ones that'll host the data request forms etc. for you.

I'm not aware of any willing to take on the liability of verifying identities as part of that, though.


WhireWheel, Onetrust, and a slew of others (https://iapp.org/resources/privacy-industry-index-pii-vendor...) offer a DSAR module/platform to handle this.

They definitely won't be held liable per their contract, nor handle the verification for you, but they do provide a system (a ticketing system, really) with a set of workflows that help put an identity verification process in place. It's nothing magic, but it definitely helps the privacy lawyers and DPOs who will end up bothering you with verifying all the data and providing it if needed.


Do you know of any companies that handle the whole process, similar to how stripe handles the whole purchasing process?


Can you elaborate on what you mean by "the whole process"? If you have some internal processes that handle the data, then you can't really separate and outsource the "GDPR part" without outsourcing the whole business process that handles the data - e.g. if you ship goods, then the handling of addresses can't (IMHO) be separated from the shipping; if you run a website, then the handling of all the related privacy issues can't be separated from running the website.

Furthermore, even if you outsource the whole business process that handles sensitive data, you're still the 'controller' of the data, and you still carry full responsibility for it - you can and should have legal arrangements with the processor to recoup your damages/costs if they screw up, but you're the party that's directly liable, and if the processor can't/won't pay, that's solely your problem.

You can outsource certain parts of GDPR compliance - such as handling the paperwork and managing the customer enquiries properly, but it's hard(impossible?) to separate "the whole process" from the rest of your business.


The general idea in my head is that instead of each company needing a department to handle GDPR, they outsource the department to a third party. Because of its size, I would assume the third party could handle more than one company at a time, lowering costs. Yes, this wouldn't solve liability, but it reduces the chance there will be mistakes, like those mentioned in the article.


Yes, that's an option, we have local companies that handle private data protection issues for other companies, that was a thing already pre-GDPR with the earlier data protection legislation but it's now a larger business as the scope has increased.

What they do is somewhat similar to consulting and audit companies - they'll go over your internal processes and/or suggest standard procedures if you don't have any; they'll generally consult with the local data protection authority on particular interpretations and apply them to all their customers, etc. Creating/adapting a procedure for answering customer requests for their data (including the identity verification) would be part of the service; another common service is doing GDPR-policy training for e.g. call center employees. So a thing like that already exists; they won't take over your liability or your processes but they'll review your processes and hand-hold you through any adjustments needed.


I see, thanks for the info!


I'm sure one of the major consulting shops like Accenture would be willing to do it (poorly, and for insane amounts of money, of course). I'm not aware of any SaaS style offerings, though.


Replied to the wrong comment, see parent comment for my reply :)


Hundreds of experts. And yet none of them know what they're doing.


I think that's exactly the problem -- small/medium companies can't really afford to hire someone to do GDPR compliance (or at least, do it well). It's textbook regulatory capture.


I'm glad we can use Europe for an experimental staging area!


There definitely is. The problem right now is that a huge number of them are prohibitively expensive. I paid $2,000 for GDPR advising for a $7k/mo MRR app and was quoted $25,000 to advise/assist through the entire implementation (which I'd still have to do myself). It also sounded like it's necessary to hire someone to receive data requests, which would probably cost more (but seems super possible to provide as a service to many companies).


Aren't companies under the GDPR obliged to report [1] if they got hacked / had a data breach? Sounds to me as if quite a few just got hacked via social engineering...

[1] https://blog.netwrix.com/2018/04/19/gdpr-rules-of-data-breac...


“Bad implementation of GDPR's information rights” would be the more correct but less clickbaity headline IMHO.

The funny twist being that these bad implementations are a GDPR violation too and can be punishable under GDPR.


Even if done correctly, they are just verifying that you bothered to get a photoshopped passport with the target's name in it.


Basically every country needs to run its own PKI for its citizens to have any hope of legally proving they are who they say they are. So far I've seen a single well-working solution to the authentication and authorization problem, and that is the Estonian ID-card PKI; others are trying to implement it (Finland, Latvia, even the EU itself) but they're basically nearly two decades behind.


The funny twist is that it seems pretty hard, maybe even impossible, to comply: doing enough verification could itself be punishable under GDPR.

All you know about a user is their name. How can you verify someone's identity while still not being "overly burdensome" (which is required by GDPR)?


It appears to me that you can’t opt out of the right to access or right of deletion. Not even temporarily.


Yes. This just shows the problem that was already there. The GDPR is fundamentally about having the responsibility to handle people's data properly. The companies that failed obviously cannot handle the responsibility of managing personal data without it getting into the hands of the wrong people, and they shouldn't have had it in the first place.

It's much harder to test but this is a good indicator that they probably also don't have controls to manage employee access.


Can anyone here who's experienced with GDPR's provisions talk about the extent any given company is allowed to go to verify the identity of someone requesting information?

I'd hate it if someone pulled an "I'd like to be forgotten" request on me without said organization absolutely verifying my identity. It would seem that if they complied, they wouldn't be able to undo the operation...
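One common mitigation for exactly that irreversibility worry (my own sketch, not something GDPR mandates - though the regulation does give controllers up to a month to act on a request) is a soft delete with a grace window, so the real owner can still contest a bogus erasure. All names and the two-week window here are made up:

```python
# Soft-delete erasure requests: flag now, purge only after a grace
# window, so a fraudulent request can still be reversed in time.

GRACE_SECONDS = 14 * 24 * 3600  # e.g. two weeks to contest the request

def request_erasure(db: dict, user_id: str, now: float) -> None:
    db[user_id]["erase_after"] = now + GRACE_SECONDS  # flag, don't delete

def cancel_erasure(db: dict, user_id: str) -> None:
    db[user_id].pop("erase_after", None)  # the real owner can undo

def purge_due(db: dict, now: float) -> None:
    for uid in [u for u, rec in db.items()
                if rec.get("erase_after", float("inf")) <= now]:
        del db[uid]

db = {"alice": {"email": "a@example.com"}}
request_erasure(db, "alice", 0.0)
purge_due(db, 1.0)                   # still inside the window
assert "alice" in db
purge_due(db, GRACE_SECONDS + 1.0)   # window elapsed: actually gone
assert "alice" not in db
```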


GDPR is not a perfect law, but I think it's great leverage to use when negotiating against US companies if you're an EU country.

I mean, at some point EU countries will get tired of the US going a little too far. It's about being able to negotiate, and with the digital age and the massive ad market, it only makes sense for the EU to protect itself after Brexit and the Cambridge Analytica stories.


The main benefit of GDPR is that it forced companies to audit their policies and procedures to have slightly more hygienic data practices.

The downside is that it made social engineering easier. What this guy did was nothing new. But now those businesses were compelled to help him instead of ignore him until he pestered them enough.

Good job government regulators! You just made the most common fraud even easier!


This is the problem with overly aggressive legislation. The big companies performed well, the small companies ignored the law, and the medium sized companies tried to comply and failed. You can’t legislate good behavior because people will always find a way around the laws. The legal philosophy behind GDPR seems to be nothing more than to make everyone a criminal and then choose who to prosecute, and in that case why have laws at all rather than letting law enforcement punish whoever they choose?

At the end of the day this incentivizes big companies to comply with the laws because they can afford the necessary legal teams and the laws provide a moat to their entrenched power. It incentivizes small companies to ignore the laws just like the move fast and break things mentality of Silicon Valley. The people it hurts are the medium sized, growing companies who are best positioned to fight the big monopolies but are now being held back by well meaning but over burdensome legislation.

The good side of GDPR is that it’s trying to advocate for consumer privacy, which overall is a good thing. The issue is that it’s not a legal problem. In the same way you can’t declare drugs to be illegal and expect the supply of drugs to instantly disappear, you can’t declare misuse of data illegal and expect the same. As long as the data has value there will be a market for it, so why not draft sensible regulations instead of trying to solve the problem with laws.


I wouldn't agree that the medium-sized companies tried to comply with the GDPR and failed. Yes, they tried to comply with that particular request, but the failures suggest that they didn't even try to be GDPR compliant in the first place - if they had, they would have assigned a data protection officer who would long ago have asked "what do we do in case of a personal information request?", and written down a reasonable process for handling such requests, possibly consulting with the local data protection agency.

That would count as "trying", as it was their duty to have done this a year and a half ago. It's basic 'table stakes', a precondition to being permitted to handle personal data at all. If they started thinking about "how do we verify identities" only on the day they received the request from this researcher, then that's not trying to comply, that's being grossly negligent.


I keep hearing on Hacker News how easy it is to be GDPR compliant and that any company following good privacy practices should have no problems.

Then I see stuff like:

>if they had done so, then they would have had assigned a data protection officer who would have long ago asked themselves the question "what do we do in case of a personal information request?"

If I have to hire/create an entirely new position at my company these laws are not straightforward or common sense.


In most places it's not a full time position but simply a designation on who's the responsible person.

However, the reason it's not discussed much in the context of GDPR compliance is that it's not really a new GDPR requirement - it was already mandatory under the previous privacy laws; any EU company handling private data had to have a designated DPO for many years before GDPR, and that's a non-negotiable basic requirement if you want to handle such data. If a company doesn't have this, then they weren't permitted to handle private data even before GDPR was conceived. GDPR added some more rights for consumers (such as this right to request) which required the existing DPOs to adjust procedures.

I mean, does it seem likely to you that your company can implement proper, secure handling of private data without having someone responsible for it? This very discussion shows that it takes some attention and specialized knowledge.

And if you can't pay the 'table stakes', then you're not allowed to 'play the game' - this has many parallels with other regulations. Just as it's reasonable to shut down restaurants for gross hygiene violations and prevent them from operating until/unless they can handle food properly, it's reasonable to shut down data processing activities for gross 'data hygiene' violations, and prevent them from operating until/unless they can handle private data properly. Just as you can't do various construction and industrial activities without having a designated occupational hazard person (not necessarily full-time), you can't handle private data processing without a designated data protection person. There's nothing novel here.


I’m very skeptical of the idea that, if they didn’t account for this particular attack vector, that must mean nobody thought about GDPR at all. If someone came to me to write a DSR plan, I’d be thinking about how to reliably enumerate all the places we might have personal data, not how to verify that people aren’t impostors. (Does your DSR plan also have to prevent spearphishing your DBAs?)


It's not about an attack vector (which may be uncommon or unexpected) but about a basic process for handling information requests. If you're a data controller, there are a few duties you must satisfy, and handling these requests is a mandatory part.

If you're not prepared (whatever that means in your organization) to receive and answer information requests from customers, then you're not prepared to meet GDPR requirements.

If a company had a reasonable process for identity verification in place, and that process was circumvented by an attacker, then I (and most likely the regulator) would consider that as trying and failing, which is generally not punishable but mandates improvements. However, if the company didn't have any process in place (which seems to be the case in many of these examples) and "just happened" to fail, then I (and, again, most likely the regulator) would consider that as negligence, because they had an explicit duty to "use all reasonable measures to verify the identity of a data subject who requests access, in particular in the context of online services and online identifiers" and did not. The question essentially comes down to "were the measures they used reasonable?"; if you spend a little time thinking about it beforehand you generally get to something reasonable, but if a random employee tries to wing it when the first request comes, then it's plausible that the result will not be reasonable.

And, regarding "Does your DSR plan also have to prevent spearphishing your DBAs?", the answer is not clearly negative - GDPR does require you to take reasonable measures to ensure data security, and that could involve taking steps both to reduce the risk of spearphishing DBAs and to ensure that DBAs don't get unlimited, unlogged, unsupervised access to private data; in any case, if a breach occurs by spearphishing your DBAs, you'd need to demonstrate to the regulator that you did take reasonable measures and this wasn't pure negligence.


this is peanuts. one eu country government decided it needed approval under gdpr from hospital patients before treatment/surgery/etc. this denied service to a bunch of (tech) illiterate people, bringing the budget back in line.

another one: due to gdpr, violent thugs hired by the police to hurt protesters cannot be named. it would apparently trample on their rights.

gdpr is, as forecasted, a complete mess. a mess that is exploited by corrupt eu governments to great effect.


who could have thought


And Europeans wonder why Americans hate the GDPR laws. Overreaching regulation will be implemented in a least-effort method.


When I first heard about the GDPR I imagined that this kind of thing might happen.

https://devrant.com/rants/1400805/you-know-gdpr-compliance-i...



