One nice example of this, which I once saw in a real system: a system that collects credit card numbers from the public might not need to be able to read those numbers after they're recorded, if the charges are going to be made by a later batch process. So the credit-card-receiving system can use a public key to encrypt the incoming data in such a way that only the batch processor will later be able to decrypt it. In this case there is certainly a machine that has the capacity to read the data, but it's not the same machine that stores the data or that is exposed to connections from the general public. (That paradigm is a lot easier in the case of offline batch processing, which is mostly different from the applications you were describing... but wherever a system has a component like this, it may be possible to reduce the window of exposure with this pattern.)
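A rough sketch of that write-only ingestion pattern, assuming the PyNaCl library (the names here are made up for illustration): the internet-facing collector holds only a public key, so it can seal incoming card numbers but never read them back, while the offline batch processor holds the private key and decrypts later.

    # Hedged sketch of write-only ingestion using an asymmetric "sealed box".
    # Assumes PyNaCl; key and function names are illustrative, not a real system.
    from nacl.public import PrivateKey, SealedBox

    # Generated once on the isolated batch-processing host.
    batch_private_key = PrivateKey.generate()
    batch_public_key = batch_private_key.public_key  # only this is deployed to the web tier

    # On the internet-facing collector (public key only -- it cannot decrypt):
    def seal_card_number(card_number: str) -> bytes:
        return SealedBox(batch_public_key).encrypt(card_number.encode())

    # Later, on the batch processor (the only place the private key exists):
    def open_card_number(blob: bytes) -> str:
        return SealedBox(batch_private_key).decrypt(blob).decode()

    stored = seal_card_number("4111 1111 1111 1111")
    assert open_card_number(stored) == "4111 1111 1111 1111"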
Another example is that a database server might be compromised via a different vector from its clients. If some database fields were encrypted, the impact of a database-only compromise would be reduced.
Edit: elsewhere in this thread someone points out that your argument may be meant to be more specific to this breach, and I think it's most likely right with respect to this particular incident.
All the data is hot data. That's why Equifax exists at all. There are multiple internal services which can say "pull that entire file." If you lose any of them, your data store doesn't matter; there goes the ball game.
Someone's about to say "But rate limiting!" and not understand that Bank of America can ask for 10 million files at a time. (One cron job talking to another cron job, etc etc, will fetch updates for a substantial fraction of all of their customers to update their internal records for underwriting. A larger recurring use case is "Refresh the soft pull on everyone in our marketing funnel whose pull is stale." Note that that is a sizable portion of the adult population.)
One of the more interesting bits of the project is that soft-real-time is not a requirement, so some simpler, slower and older algorithms become feasible (interactive ZKPs, even fully-HE systems perhaps). A very specific use case has allowed for the possibility. But it’s amazing to work on :)
It's possible that you've protected the data at rest, but if some 'get' operation is just going to serve it up, what is the effective difference?
An attacker is forced to either exfiltrate the data slowly (so the surge isn't noticed) or alert ops monitoring that the decryption service is unusually active.
Defense in depth raises the difficulty in going unnoticed and raises the time it takes to succeed.
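As a rough illustration of what that monitoring might look like, here's a minimal sketch of a sliding-window check on a decryption service; the window, baseline, and alert hook are invented placeholders rather than anything from a real deployment.

    # Hedged sketch: flag when a decryption oracle is unusually busy.
    # WINDOW_SECONDS, BASELINE_PER_WINDOW and the alert callback are made up.
    import time
    from collections import deque

    WINDOW_SECONDS = 60
    BASELINE_PER_WINDOW = 500   # assumed "normal" request volume per window
    recent = deque()

    def record_decrypt_request(alert=print) -> None:
        now = time.monotonic()
        recent.append(now)
        # Drop requests that have fallen out of the sliding window.
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()
        if len(recent) > BASELINE_PER_WINDOW:
            alert("decryption volume above baseline; possible bulk exfiltration in progress")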
Edit: the key comes from the user; it's not stored anywhere on a server. The key can optionally never leave the user's device.
Does that mean that if a user loses his key, he won't be able to get access to his data?
This presupposes you know what you're doing, security-wise, on your network, and with your backups, and with temporary data on servers. All of your interior traffic is encrypted, right? Right.
It doesn't sound like Equifax was even on the map.
It's small, but it is something.
The other reason to do it, frankly, is because when you end up in the news, this is the first thing everyone asks. It's easier to say yes than to say "no, but..." and explain why this is a dumb question to ask (as you are trying to do here, with a far more technical audience).
I think you mean "encryption wouldn't have mattered for the known attack", but there are lots of cases where encryption would help... even in cases that don't involve hacking, like hard-drive disposal.
If you're a defender, I would go further and say you must assume it.
To get a dump you'd need to compromise two systems - a database server that has access to all the data, but in the data it stores and returns all the interesting columns are encrypted; and an app server that can decrypt data that it gets (and it has the keys only for the columns that this app server actually needs to use, if you have different classes of confidential data) but can't get all the data from the database freely, only a predefined subset in a limited and logged manner controlled by the database system.
Granted, if the attackers can get RCE on one of those systems then it's likely that with some effort they can get the other system as well in a similar manner, but it's still a useful defense in depth.
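A minimal sketch of that split, assuming the Python `cryptography` package (table and column names are invented): the database only ever stores ciphertext for the sensitive column, while the key lives only on the app tier that actually needs to read it.

    # Hedged sketch of per-column encryption with the key held only by the app server.
    # Assumes the `cryptography` package; column names and key handling are illustrative.
    from cryptography.fernet import Fernet

    # Provisioned to the app server (e.g. from a secrets manager); never written to the DB.
    SSN_COLUMN_KEY = Fernet.generate_key()
    ssn_cipher = Fernet(SSN_COLUMN_KEY)

    def encrypt_ssn(ssn: str) -> bytes:
        # This ciphertext is what the database stores and returns for the ssn column.
        return ssn_cipher.encrypt(ssn.encode())

    def decrypt_ssn(stored: bytes) -> str:
        return ssn_cipher.decrypt(stored).decode()

    # A dump of the database alone yields only Fernet tokens; the app server alone
    # has the key but only the limited, logged query paths the DB layer exposes to it.
    token = encrypt_ssn("123-45-6789")
    assert decrypt_ssn(token) == "123-45-6789"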
See the example vulnerable web service here: https://gist.github.com/emidln/a43b9fee4fc55273106c4b850f6b4...
Is the assumption that control over the db can pivot back to the server or that if there are basic issues with db access that there must be other exploitable issues (probably a fair assumption)?
> If you're a defender, I would go further and say you must assume it.
The assumptions you make when reacting to a known compromise are wildly different from the access an attacker actually has and what they might be able to do.
In general I am not sure if we wish to conflate systems security and cryptographic security - cryptographic security ideally should guard against system security failures. Although in practice I grant you that broad system failures which expose crypto secrets (code execution would fall into that) would lead to crypto failures as well.
Just a couple days ago, I heard a mathematician call a certain complicated knot invariant "space math" because of how out there the approach was, but that was idiosyncratic.
For all the horrible stuff-ups Equifax management has had, it sounds like Congress would be substantively happier with them if they said "yes, our SAN has an FDE feature." The media would certainly be kinder to them.
I'm in two minds about rolling something objectively pointless just so I can address these sorts of concerns.
This is a problem even with HSM designs; in real-world systems, HSMs usually just present an oracle interface to data. The hope is that with sufficient monitoring, you can use the oracle to buy time for defenders to detect a compromise before all your data is exfiltrated.
I think most penetration testers have the experience of owning up a server and going on the brief treasure hunt for the static key stored somewhere on the filesystem.
Among probable hits, "hierarchical storage management" and "hardware security module".
Is it because the keys are easily compromised or stolen?
TL;DR: If a machine can actually use the data in the database, it has to be able to get the data in a decrypted form. If an attacker compromises that machine, the attacker can also get the data in decrypted form.
The compromised component at Equifax was the running application which is built to retrieve data from the database - it knows how to log in to the database and do whatever decryption is required to retrieve data. You don't particularly need to compromise passwords or encryption keys or whatever, because the application will just use them for you.
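To make the point concrete, here's a toy sketch (everything in it is invented, and the "database" is just a dict): once an attacker is executing code inside the application, they don't need to steal the key or the database password, because the app's own code path does the decryption for them.

    # Hedged toy sketch of why app-level RCE defeats encryption of stored data.
    # The key, "database" and record format are stand-ins, not a real system.
    from cryptography.fernet import Fernet

    APP_KEY = Fernet.generate_key()           # the key the application legitimately holds
    cipher = Fernet(APP_KEY)

    # Stand-in for an encrypted-at-rest datastore.
    database = {cid: cipher.encrypt(f"record for customer {cid}".encode())
                for cid in ("1001", "1002", "1003")}

    def get_customer_record(customer_id: str) -> str:
        # The normal, legitimate code path: fetch ciphertext, decrypt with the app's key.
        return cipher.decrypt(database[customer_id]).decode()

    # An attacker running code in this process never touches APP_KEY directly;
    # they just call the same function the application calls, for every id.
    stolen = [get_customer_record(cid) for cid in database]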
Encryption protects against compromising the server itself either physically or at the OS level, which is not nothing, but also doesn't help if you have an underpaid janitor with a gambling problem with all the keys in his pocket guarding the loading bays out back.
So why Fortune 500 companies only? I am just trying to understand what this has to do with the scale of a company.
Elsewhere in this thread you said that a compromise of a database is likely associated with a compromise of systems that access that database, but this is not necessarily true if the platforms and administration of these systems are very heterogeneous (say, different operating systems), and in any case it doesn't make it not an application of defense in depth. (Suppose that 90% of attacks that penetrate the database also penetrate the client; now this use of encryption has managed to beneficially mitigate 10% of those attacks.)
It helps here to understand two things:
1. When we talk about protecting servers with encryption, we're stipulating attackers with RCE. That was the case with the Equifax attacker and is routinely the case in real-world applications.
2. SQLI, the app-layer non-RCE attack that gives you direct access to raw database rows (ie: the one non-RCE case in which you'd get access to the data we're encrypting) almost always equates to RCE anyways.
There are real designs that involve cryptography that mitigate these problems. They fall into two buckets:
1. HSM/Crypto "oracle" schemes that are backed by intensive staffing and monitoring. Yes: nobody disagrees that Equifax should have had a serious security team doing serious monitoring. But: if it had that, it wouldn't have had this breach, which involved a published critical framework vulnerability. In other words: in bucket 1, "encryption" isn't helping you, "better security team" is.
2. Moon math schemes. I'm all for moon math, but when people say "phwat!? Equifax worn't even encrypticated?", they do not as a rule have a particular moon math scheme in mind (or, likely, an appreciation for what it means to be the first at-scale deployment of a new moon math scheme).
I am not mounting an argument that people shouldn't encrypt things. Hard drives do get lost. I'm just saying that as an operational security mitigation, encryption isn't doing what 99.99% of developers think it's doing in this scenario.
(by the way, I can't take credit for the term "moon math" but it is super useful)
...Is what I can only imagine is being said when the cameras are off.
 Yes, "no protection" is also a way of storing data.
Encryption at the network level is a must. Corporate routers/firewalls have been very vulnerable before, and grabbing everything is a lot easier if you've compromised the network.
Encryption at rest is a must, as at some point you need to replace those disks and it's a lot easier if you can be cavalier with the handling afterwards because you know it is unreadable.
Encryption at the application level (object encryption and between services) is a must. That means if a service is hacked or the DB is dumped, the attacker may not be able to read any of it, or at most only the records accessed whilst the hack is in progress.
You replicate access control patterns, like in a secure building... These may come down to one or more common denominators (can you trust the security receptionist), but better that than the whole chain is vulnerable... You then only have one set of alarms, logs, metrics, etc to keep an eye on and to test very thoroughly.
In the physical world, for security scenarios we have very strict procedures with locks, boxes, safes, multiple security door/gate entry systems, multiple participants and signatures involved in every action, etc. to mitigate internal and external error, failure or attack. All of these can have an electronic information system equivalent, and we should start designing security in web systems with these ideas in mind when it is as significant as Equifax.
On a serious note, we really need to make encryption a part of high school mathematics. What teenager doesn't want to write secret messages?
When I took an intro to security course in college, we spent a couple of classes building a very elementary understanding of how encryption works, with plenty of hands-on examples (using laughably insecure algorithms, but still enough to get the points across). I think most students found it the most interesting part of the course, since most everything else was more about security policy (an MBA could've probably easily taken the course successfully).
My SO taught HS freshmen physics (close, but still). I'd say we need to make math a part of HS mathematics. ~60% of the kids can't do algebra in any way. Really. Trying to make encryption a part of it is essentially useless. I hear from time to time that a 'basic-adulting' course would be great to have had. HA! You think mortgage interest rates and basic car maintenance would be learned? Most HS students in the US can barely keep from snapping their genitals at each other during class. Find me a cell-phone jammer that the FCC will approve of for under $200 and EVERY teacher in the US will buy five that very same day. You'd make billions.
Faraday cages may not be a bad idea though. Copper is fairly cheap; the expensive part is making new windows and certifying that the door to the classroom is closed and that no signals can get in. Heck, with the way battery life is going, maybe just take away electrical outlets and power strips. Only 1st period would be affected, plus a few kids after lunch.
They could reveal tomorrow that their data center fire protection protocols mandate the use of printed backups, feeding them to the flames with hopes the god of data destruction would be appeased and leave their servers alone. I would not be surprised. Nor would I be surprised if the paper backups were only available as printouts on toilet paper, 1000 miles away, in the CEO's office.
No, my reaction would be, "sounds about right for them, though I guess it's +1 point for effort on keeping any backups at all"
FDE protects against…
• … theft of the media.
• That's it.
• That is about 0.00000002% of the actual intrusions that you have to worry about.
• Easy rule: If psql can read it in cleartext, it's not protected.
• (It's a great idea for laptops, of course.)
And then it recommends: "Always encrypt specific columns, not entire database or disk"
However, do encrypt your backups.
I think it is fairly sensible.
 Securing PostgreSQL [PDF], Page 31 : http://thebuild.com/presentations/pgconfeu-2016-securing-pos...
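For what it's worth, the column-level approach that presentation recommends can be done inside PostgreSQL itself with the pgcrypto extension; here's a hedged sketch driven from psycopg2 (the connection string, table, and key handling are illustrative, and the key still has to live somewhere the application can reach, which is the thread's larger point).

    # Hedged sketch of column-level encryption via pgcrypto's pgp_sym_encrypt/decrypt.
    # Assumes: CREATE EXTENSION pgcrypto; a table customers(name text, ssn bytea);
    # the connection string and key handling below are placeholders.
    import psycopg2

    COLUMN_KEY = "passphrase-fetched-from-a-secrets-manager"   # not hard-coded in real life

    conn = psycopg2.connect("dbname=example")
    cur = conn.cursor()

    # The ssn column holds ciphertext (bytea), never plaintext.
    cur.execute(
        "INSERT INTO customers (name, ssn) VALUES (%s, pgp_sym_encrypt(%s, %s))",
        ("Alice", "123-45-6789", COLUMN_KEY),
    )

    # Only code that knows COLUMN_KEY can read the column back in cleartext.
    cur.execute(
        "SELECT name, pgp_sym_decrypt(ssn, %s) FROM customers WHERE name = %s",
        (COLUMN_KEY, "Alice"),
    )
    print(cur.fetchone())
    conn.commit()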
Equifax themselves are not a financial institution, but as a vendor to one, would it not apply to them too?
(Also not a lawyer)
That said, I find these credit agencies absurd; they need to either disappear or 100% change their business.
ok, argue this.
Normalizing the corporate death penalty starts the ball rolling towards this end.
Do I need to contact all of my line item creditors and ask them to remove references to Equifax?
Basically, the overall idea is good (data about you should be owned by you) but mapping that into actual nuts & bolts implementation details is a huge pain in the ass.
That's ... just wrong. The email address itself, unless it happens to be a role address, is PII. Whether there is a person's name in there doesn't matter.
> IP addresses can, in many cases, also be considered PII.
Really, no, the address isn't PII; the information about someone's behaviour is. You may store IP addresses as much as you like. You just may not store them as a key that could be used to link information about someone to other information about them.
> Basically, the overall idea is good (data about you should be owned by you) but mapping that into actual nuts & bolts implementation details is a huge pain in the ass.
I wouldn't say it's always trivial, but it's not really that hard either. The only thing that is really hard is collecting data that you have no justification to collect. If you simply avoid collecting data, none of the other stuff affects you.
Unfortunately, since our modern digital world works with data, that particular tautology is next to useless.
With my small business hat on, I'm deeply concerned about the real world implications of the GDPR next year, even as with my privacy advocate hat on I'm happy that the kinds of big data-hoarding companies that cause most of the real problems are going to face more meaningful regulation.
The lawyers would disagree with you. If the email address contains anything that identifies you personally, it's PII and thus falls under GDPR. Therefore, if you collect email addresses you have to go through great pains to protect them.
As for the rest of your stuff, while you are technically correct, your comment is pretty useless. We live in a data driven world that requires.... well... collecting data. Saying "just don't collect email" isn't helpful and collecting IP address info without associating it with usage data is pretty pointless.
It is a huge, expensive undertaking to make any kind of "legacy" pre-GDPR infrastructure GDPR compliant.
> If you simply avoid collecting data, none of the other stuff affects you.
If you are doing anything useful on the internet, you are collecting data.
You are simply missing the point. It's not required to "contain" anything. If it is the email address of a person, then that makes it PII.
> Therefore, if you collect email addresses you have to go through great pains to protect them.
Well, yes, of course you do? If you want to profit from using other people's addresses, you better make sure they don't get harmed by it.
> We live in a data driven world that requires.... well... collecting data.
Erm ... no? We don't live in a data driven world, we live in a world where some people are completely unwilling to respect the boundaries of other people and consider it their right to do whatever it takes to manipulate them in their interest. There is no such right, there is nothing that requires that you violate other people.
> Saying "just dont collect email" isn't helpful
Well, whether it is helpful depends on what you expect to be helped with. But it is a perfectly viable thing to do. If people voluntarily hand you their email address so you can send them invoices, say, there is absolutely no problem with you doing so, because that obviously means that there is consent. Anything beyond that, and you are just selfishly trying to ignore the interests of other people, and it is perfectly viable to just not do that.
> collecting IP address info without associating it with usage data is pretty pointless.
... so don't do it, then?
> It is a huge, expensive undertaking to make any kind of "legacy" pre-GDPR infrastructure GDPR compliant.
Erm, well, it's an expensive undertaking to stop being an asshole ... so what?
> If you are doing anything useful on the internet, you are collecting data.
So, if I publish a free software project on my own web server that doesn't write any log files ... that's not useful? Could you explain how exactly that reasoning goes?
If you don't need that email address for anything, then don't collect it. If you do (perhaps because the customer has agreed to let you send them email), then fine, collect it, but also provide a way to delete it, permanently, if the customer wants to terminate their relationship with you.
Yes, it is. But worthwhile things are often not trivial.
Indeed. But there is not trivial, and then there is prohibitively difficult and expensive to the point of being unreasonable.
Obvious examples, exhibit A: Suppose you use a deduplicating backup system, and being a responsible service you also ensure that all user data is properly encrypted as part of your backup process. If each single customer who decides they don't want to deal with you any more can require you to remove any reference to them from anything you currently store or have ever stored, please describe a technically and commercially viable strategy for reliably cleansing your backups.
We have swung way too far into a regime where personal information is irresponsibly slung around the internet with no accountability and no consequences when that data is misused.
> If each single customer who decides they don't want to deal with you any more can require you to remove any reference to them from anything you currently store or have ever stored, please describe a technically and commercially viable strategy for reliably cleansing your backups.
That's not my job to do. That's "your" job as a responsible technologist to figure out, based on your knowledge of your systems and processes. I'm in the midst of preparing to be GDPR compliant at my employer, and I'm not saying it's easy, but all these problems are tractable. Would I rather be focused 100% on building new products and features? Sure, who wouldn't? But part of being a professional developer is taking responsibility for the software you write, which includes handling customer data with respect. GDPR is a step in the right direction for that.
I haven't built my business off your data or anyone else's, unless things like having your details to charge the agreed payments or keeping routine server logs counts. None of my businesses is in the data harvesting field, nor do any of them collect personal data that isn't legitimately relevant to what they do or share it with any third parties other than to help do whatever it is they do.
Not yours. Mine. I dictate how it is used and who has it.
Well, no, you don't. You might wish you did, but neither the law nor the facts are currently on your side. This will remain the case even under GDPR.
You keeping my data can actually be a threat to my security and livelihood.
Your paranoia may be a greater threat to your security and livelihood. There is very little that we do that could pose any significant threat to anyone even if our systems were compromised. How do things like keeping records of how people are using our own services and resources to help with our own planning and protection against abuse pose any threat to you whatsoever?
That's "your" job as a responsible technologist to figure out, based on your knowledge of your systems and processes.
And I suggest that for many businesses, no solution will exist that is responsible in terms of good practices like keeping proper records and back-ups, compliant with the letter of the law under the GDPR, and commercially viable in the face of people actually exercising their full rights under that law.
But part of being a professional developer is taking responsibility for the software you write, which includes handling customer data with respect.
I am a staunch advocate of privacy and protecting the rights of the little guy. I do treat all personal data with respect, and we go to considerable lengths in my businesses to ensure that such data is not collected or shared unnecessarily, is stored and processed according to good practices, and so on. We have always done so, from day one, even with only limited resources and sometimes at the expense of doing things that would no doubt have made us more money but crossed into territory we weren't happy being in.
My objection to the GDPR is precisely that despite doing reasonable things to run our businesses and doing nothing that most people would consider even remotely controversial or unreasonable, we are still subject to the kinds of excessive and expensive measures we have been talking about. It's rather ironic that the obligation to notify third parties to which data has been disclosed comes with caveats about being impossible or involving disproportionate effort, yet the obligation to erase data held directly does not.
However, given that this is the case under the GDPR, it's hard to imagine how most businesses could withstand full enforcement of the right to erasure by customers wanting to make trouble without compromising their ability to operate viably and responsibly in other respects. That is not, IMHO, a good law, even if you're a privacy advocate. In fact, since it sets an almost impossibly high standard for compliance, it is arguably worse than a more moderate law, because businesses may decide that they're screwed in any case if someone wants to make trouble so they have little to lose by not doing their best in terms of data protection.
If you use an RNG to generate IP addresses, the output does not represent any information about any person, hence it is not PII, even if it happens to coincide with the actual IP address of a person who is protected under the relevant regulation.
"If a business collects and processes IP addresses, but has no legal means of linking those IP addresses to the identities of the relevant users, then those IP addresses are unlikely to be personal data. However, businesses should note that if they have sufficient information to link an IP address to a particular individual (e.g., through login details, cookies, or any other information or technology) then that IP address is personal data, and is subject to the full protections of EU data protection law."
Privacy is worth protecting. Plus, it's still a huge value add to your customers if you provide them these controls, whether you're legally required to or not.
Assuming that is what you meant, consider: Equifax's CEO owns all information about Equifax, looks at all this negative press recently, and decides it should be scrubbed from the Internet / all publications / etc. If you meant it only to apply to people, let's play the Godwin's Law card and suggest that Hitler (or his descendants) wishes to scrub all information about the Holocaust, etc.
I think that scenario is far, far worse.
The EU operates under similar provisions, which as an ex-Googler and tech entrepreneur I find pretty annoying, but as a person find pretty encouraging.
(There are issues even under this distinction that are problematic: if you commit a crime, does the public have a right to know? What if your crime puts them at risk? If you have a reputation for screwing your business partners over, should future business partners have a right to seek this out? But at the same time, there are huge negative externalities to not being able to control this information. If a company has false information on you - as happens pretty frequently - do they have a right to sell it to so many parties that correcting it becomes impossible?)
What if I lent you money and you never repaid me -- could you legally prevent me from telling anyone about it, by virtue of it being private information about you? If you think that's unreasonable, isn't that more or less the basis of your credit score?
A set of questions I've been thinking through are:
What is privacy?
Can it be quantified?
What is identity?
What are the reasons we want to check or confirm identity?
Can some of those reasons be eliminated?
It seems to me that privacy is the ability to define and defend boundaries of information and disclosure. That the question of what is or isn't known, and to whose benefit that information is used (and in particular, to the benefit of the subject of the information, of society as a whole, or some specific third party or parties), matters. That there is a continuum of interests in protecting and revealing relevant information, much of which has to do with the relationship of the subject to society, and that those with great power, or a history of abusing society's trust, have a far smaller claim to relevant privacy. (Balanced, perhaps, with potential consequent risks: the wealthy in parts of the world are subject to stalkers, frauds, kidnapping, and extortion threats, for example.)
Persons in positions of high power, or convicted of crimes, or who've violated public trust, should be faced with greater disclosures. That would include, on at least two counts, Equifax's former CEO.
Much of the reason for seeking information is to establish trust or credit. Should I trust what you say, the capabilities you claim to have, that you own or control or have created specific properties, resources, or communications? Are you the specific subject of a given medical history? Do you owe, or can you claim, a tax debt or a pension credit?
Also part of this question is what risks (or opportunities) disclosure of specific information portends. Do the specifics of a romantic relationship you had 30 years ago matter? Does the fact that you are, or aren't, a politician, a military officer, or gay, matter? Or that the romantic partner was or was not of age of consent? Or that you were or weren't?
When is knowledge power or control? When is it liberating? For how long should it be controlled?
Do rights expire after a fixed set of time? On death? Or are they carried forward according to, say, tribal customs or beliefs? For how long?
If we say "nobody is allowed to keep information about other people ever," that's clearly a huge reduction in freedom. Americans generally like freedom even if it brings some negative stuff with it. (Think The KKK - they would be flat out illegal in some countries but in America they are protected under the law)
Some people may say it's morally repugnant to restrict private entities from writing down information they happen to know about other people. If I write in my diary "HN user nostrademons said XXX" should HN user nostrademons own that information just because my diary might be stolen?
Right now how we balance it is companies are allowed to collect this information all they want and look at it themselves all they want. BUT it can only be accessed by a third party with your permission (you give a bank permission when you open an account, your landlord when they run a credit check, your insurance company when you get a quote, etc.) Creditors may send targeted offers to you based on your credit file, but you may opt out (or in) at any time. (you can do it here: https://www.optoutprescreen.com/). You can access your own file for free yearly as well as whenever you were denied something as a result of what's on your file(s). You have the right to dispute the information in your file if it's inaccurate. Creditors are required to disclose to you certain information that they used to make their decisions. You have the right to freeze your reports so nobody can access them.
IMO it's only a problem if I did not give away that information. Thing is, we give away a lot of data about ourselves.
Sure, and why is it so ridiculous to expect that I might change my mind about certain types of information, when given to a corporation, and want them to delete that information? Why shouldn't I have the right to, say, tell a company with sensitive financial data about me to delete it and terminate my relationship with them? If I can't do that, then I'm at their mercy not to sell that data, be acquired and have the data used in new ways that I did not authorize by the new owner, or be compromised and have that data in the hands of parties that would misuse it.
We're not talking about censoring obviously public information, here, or even allowing people to hide when they've done something newsworthy. We're talking about controlling the flow of, and access to, private, personally-identifying information.
It's not ridiculous, but it does impose a cost on them, and by extension everyone else dealing with them, because you changed your mind. Whether you should automatically be entitled to impose that cost on everyone else and under what circumstances is not an easy question.
Why shouldn't I have the right to, say, tell a company with sensitive financial data about me to delete it and terminate my relationship with them?
Maybe they need that data for their own financial records. Maybe those records are things they are required by law to keep.
Maybe they use that data to protect themselves against fraud or other abuses, and only use it in reasonable ways for those purposes, and allowing fraudsters to require them to delete all traces of their previous interactions would leave them unreasonably vulnerable.
If I can't do that, then I'm at their mercy not to sell that data, be acquired and have the data used in new ways that I did not authorize by the new owner, or be compromised and have that data in the hands of parties that would misuse it.
That's a false dichotomy. Practical data protection is almost always going to be about restricting not just the initial collection of data but also how that data may be used and by whom once it has been collected. An isolationist approach where everything can be kept totally secret is impractical, but it's usually not what we really want anyway, since then you couldn't do anything useful and intentional with the data either. It's more useful to ensure that people who have some data about us for legitimate purposes do not to then repurpose that data or share it with others for less legitimate purposes or without sufficient transparency and additional consent as appropriate.
Sure, and that's why we have initiatives like the GDPR that try to answer that question. It's not going to be perfect, but throwing up our hands and giving up isn't an answer either.
> Maybe they need that data for their own financial records. Maybe those records are things they are required by law to keep.
Needing that information to be personally-identifiable is pretty rare, and cases where that information needs to be retained for a long time are even rarer. But in cases where it's necessary, sure, of course, go for it. The point is that the data needs to be collected and kept for a legit business purpose.
> That's a false dichotomy.
No, it's not.
> Practical data protection is almost always going to be about restricting not just the initial collection of data but also how that data may be used and by whom once it has been collected.
That's been shown not to work all that well. Companies lose control of data all the time, whether due to a data breach or due to unscrupulous internal practices that take existing data and use it in new ways, even if not specifically authorized.
> An isolationist approach where everything can be kept totally secret is impractical, but it's usually not what we really want anyway, since then you couldn't do anything useful and intentional with the data either.
Sure, and nowhere did I suggest that's what I wanted. Please stop putting words in my mouth. I'm totally fine giving out "secret" data if there's a benefit to my doing so. But if there is no benefit to me, then companies should not be entitled to my data.
I would argue that the approach taken by the GDPR is pretty close to just throwing up our hands and giving up, just coming down on the other extreme.
Needing that information to be personally-identifiable is pretty rare, and cases where that information needs to be retained for a long time are even rarer.
Not at all. Just look at the records you are required to keep under EU VAT rules. One of the big criticisms since the changes in 2015 has been that they require an already demanding standard of evidence to be kept for the location of every single customer you sell to (if you're selling something within the scope of the rule change, obviously), you're required to keep that information for years, and your records are subject to audit by any of 28 different national tax authorities.
But in cases where it's necessary, sure, of course, go for it. The point is that the data needs to be collected and kept for a legit business purpose.
And what happens when that data includes, say, an IP address that was subject to geolocation when a customer was charged, thus linking that IP address and everything else in every log or database entry that you ever collected that mentions it with that specific customer? Since you may effectively be forced to keep the IP address associated with the customer to meet mandatory standards for tax record-keeping, must you now purge every related record or log line even from backups, on demand and entirely at your own expense?
What if those backups are stored, as many are, in an encrypted, deduplicating format? Are you now required to go through every backup you've ever taken in the history of your business and systematically obfuscate or delete every mention of that IP address? Do you have to take steps to erase any trace of it from the storage media involved, in case the media are lost and subject to recovery measures after an ordinary deletion? Do you realise how much time and money would be involved in doing that, every time a customer decided they didn't want you storing any personal data about them any more? It's totally impractical. There has to be some measure of being reasonable and proportionate in what is required.
That's been shown not to work all that well.
It doesn't work well when the regulations aren't enforced and there are limited meaningful penalties even for the sort of gross negligence that we've seen in cases like the Equifax leak. The idea that what Equifax did was compliant with the current rules is laughable, yet they've barely taken a slap on the wrist for it, despite both the degree of negligence that led to the breach and the scale and nature of the potential damage.
There's nothing inherently wrong with the principle, though. After all, there are many other things that we could do, but which are illegal and most of us don't, and we penalise those who break those laws. Why should this be any different?
I'm totally fine giving out "secret" data if there's a benefit to my doing so. But if there is no benefit to me, then companies should not be entitled to my data.
But that wasn't the scenario we were talking about. We were talking about a situation where someone legitimately had personal data about you, and you subsequently changed your mind and wanted them to delete that data. From the point of view of someone controlling and processing personal data in legitimate ways, giving you an absolute right to revoke that permission regardless of the practical consequences to anyone else involved is a totally different situation to giving you a right not to be involved in the first place.
I don't really have a problem if I end up in the background of someone's family photo and then they stick it in an album or on a hard disk somewhere. I'd have a pretty big problem if someone snapped a photo of me in a public place and then sold it to a white supremacist magazine as "the face of minorities taking over this country". We'd have an even bigger problem if there was somebody out there who had hacked every security camera in the world and was collecting images to train facial recognition for a fleet of killer drones who would eliminate all his adversaries.
These situations are not the same. Most people have no problem with the first. Most people would be pretty terrified of the last.
But I think I get the gist of the thing you're pointing at, but surely you can understand why are other people would be wary of deciding what exactly is problematic in this regard via legislation, police, courts, etc., particularly as they exist today.
Yes, the idea that you are entitled to edit other people's memories is as repugnant as it gets.
No. (EDIT: If someone has a better idea, please reply!) I filed a complaint with the CFPB with citations from their breach as well as congressional testimony requesting my credit file be removed. The response was boilerplate:
"Thank you for contacting Equifax. We remain focused on consumer protection and committed to providing outstanding service and support. Protecting the security of the information in our possession is a responsibility we take very seriously and we apologize for the concern and frustration this cybersecurity incident causes. We have developed a comprehensive portfolio of services to support all U.S. consumers. Please refer to our dedicated website, https://www.equifaxsecurity2017.com, for the latest information and updates or contact our dedicated call center at 866-447-7559. The call center was set up to assist consumers and is open every day (including weekends) from 7:00 a.m. – 1:00 a.m. Eastern Time."
> Do I need to contact all of my line item creditors and ask them to remove references to Equifax?
Even if you contact your creditors, Equifax is under no obligation to remove the data. Most credit lines have the possibility of falling off after 10 years (7 years for negative trade lines), but there is no obligation for them to be removed.
We're having to prep for that at my corp currently, and it's VERY explicit about being able to pull up and remove all personal data, with some very hefty fines if you don't.
EDIT: thought about this further and peeked at our guidelines, they may be able to get around this by the "data is integral to the function of the business" exemption, but I'd still wonder if someone could speak with authority on this.
That's probably more of an "integral to fulfilling its contractual obligations to those the data is about". It's more complicated than that, but the point is that you cannot simply declare it the purpose of your business to collect personal information and thus be exempt from data protection regulation.
No. Pay cash or get tracked.
Furthermore, your case of "pay cash" pretty much excludes you from higher education and home ownership. Unless you are super wealthy.
How is this legal?
I forgot to mention the insurance companies have their own data brokers too. So you'll end up in there if you've ever had insurance.
Wow, it's impossible to not end up in any of these systems.
Paradoxically, if you were extremely wealthy you'd definitely want to purchase umbrella insurance to insure that wealth!!
I should mention that when I said insurance I meant car, homeowners, renters, and umbrella insurance, not medical insurance. I don't know about medical insurance and what sorts of data brokers they may use.
Banks use data brokers when you have a savings or checking account too.
Note that there are all sorts of unregulated ways in which companies (e.g. Facebook) use CRA data.
Got a bank account? You're in ChexSystem and/or Early Warning.
Ever had a car loan? Credit card? Student loan? Mortgage? You're in TransUnion, Equifax, and Experian, plus more.
Got a job at a large company? You're more likely than not to be in The Work Number.
Ever return an item to the store? You're probably in The Retail Equation.
Ever had car insurance, renter's insurance, and/or homeowner's insurance? You're in LexisNexis.
It's virtually impossible, unless you live on a homestead completely off the grid, to avoid these data brokers knowing things about you.
Most people don't have a listing in any phone books these days yet you can type in a name and get credible hits from whitepages.com. Where do you think they get that data on names and where people live when you've never had a business relationship with them?
It sucks that the rich and wealthy can be as morally bankrupt as they want without any/many consequences.
Then there's this gem: "Barros also led the company's U.S. Information Solutions (USIS) business, which includes U.S.-based services that provide businesses with consumer and commercial information and insights related to areas of risk management, identity and fraud, marketing and a variety of industry-specific solutions."
 - https://www.equifax.com/about-equifax/corporate-leadership/
I don't expect them to be able to debate cipher strengths or argue key length, but I do expect them to know about Machine-To-Machine communication, Data at rest, One-way encryption and what it takes to be HIPAA compliant, etc. It's their job to know this, and it's NOT HARD. It makes me mad too.
Here's the thing: short of some California disclosure laws, it isn't clear that Equifax broke any laws, or even any contracts.
They don't have any particular duty to protect the data they have about you, other than protecting their own IP. Equifax is a clearing house for information that people say about you.
Look, if I follow you around, record all your movements, write them down, and then my notes about you get posted on the internet, what law did I break? (Equifax used to report "his marital troubles, jobs, school history, childhood, sex life, and political activities", so it isn't that far of a stretch).
Given that perspective, it makes sense that they would protect the data as much as any data vendor would. Certainly they didn't predict the PR backlash such a failure would cause and should have factored that in.
They certainly do in much of the civilised world. The US is almost as far behind the norm in the modern world when it comes to privacy safeguards as it is in its banking and financial infrastructure.
Source: u/neurotech1 https://news.ycombinator.com/item?id=15672691