I did this in 2009-10. It had been going on for a while, and it lasted for a while, but sadly I hear they've since solved it, seemingly with a nightly batch job that removes your own credit pulls.
These companies are just barely functional for their purpose.
Experian seemed to have their act together a bit more.
"Look at how incompetent they are" is more like "Look at how our industry is." They're a representative sample.
The kids on flyertalk were all over this.
Whoever files a class action should move for a remedy letting anyone purge their PII from a credit authority that has experienced a public hack exposing that PII, or some other sort of incentive for these too-big-to-improve companies to do their job.
My experience has been that the main product of most companies is management politics. Actually shipping product is nearly irrelevant to everyone's daily activities. In some cases, people get punished for being competent.
One company I worked with made it clear they had no interest in listening to competent people. People were promoted for their ability to suck up to management. They got promoted when the projects they managed were delayed, buggy, and generally non-functional. Any competent engineer was summarily drummed out of the company for causing trouble.
You can't claim negligence for failing to follow industry standard practices when there ARE no industry standard practices. The closest we have in the software field is the work done by NASA on creating legitimately safe code. But companies don't want to follow those sorts of guidelines because they make software development slow and expensive. Sure, software development is the primary driver of their business's existence no matter what industry they are in, but they feel entitled to it being cheap and fast.
PCI is an industry standard which is mandated in order to maintain good standing in the payments industry, and HIPAA is US legislation which governs the handling of patient health data.
The issue we face is that there is no equivalent for either of these in relation to handling of PII and identity data.
Why are there standards for cars, but no standards for computer systems? I think it's possible to create them. If there are standards, it's easy to define malpractice.
* Passwords should be stored only in salted and hashed form
* Code injection attacks shouldn't be possible
* Personal data should be stored in anonymized form with mapping between real and virtual id stored separately
* Only cryptographic algorithms from an approved list may be used (no MD5)
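To make the first rule concrete, here's a minimal Python sketch using the standard library's hashlib (the scrypt parameters are illustrative, not a vetted recommendation):

    import hashlib, hmac, os

    def hash_password(password):
        # A fresh random salt per password defeats precomputed tables
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        # Constant-time comparison avoids a timing side channel
        return hmac.compare_digest(candidate, digest)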
I can't see that happening if they do any kind of offsite backup and archiving. They will purge you from the current master, say they purged you, and you'll be none the wiser.
A breach of this law would cost a company 2-4% of their revenue as a fine. Seeing how these big companies operate, there would be a lot of breaches.
I like GDPR, but a lot of people claim it to be too draconian. We'll see how it works out in the EU.
Then, purging a person's data would come down to deleting that key from the system and from all backups of the keys.
That makes it a bit easier; the set of all keys will typically be a few orders of magnitude smaller than the data, and could be backed up using separate systems. Those systems wouldn't have to be updated often and access could be better controlled.
You would still need procedures checking that nobody writes out non-encrypted data (including database keys), but that's doable; a first-level scan would just run strings on your raw disks.
A disadvantage is that this would affect performance, especially for reporting services (a query gathering statistics over your customers would have to fetch all of your customers' decryption keys).
A step up would be to hand out not bare decryption keys, but pairs (decryption key, expiration time stamp) encrypted with a private key that only your database knows the matching public key of. That allows your database to detect when your applications reuse decryption keys for too long. Depending on application architecture, that pair could even be a triple (decryption key, session key, expiration time stamp), and 'encryption' of course should use a salt.
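The purge-by-key-deletion idea (sometimes called crypto-shredding) in a toy Python sketch; this assumes the third-party cryptography package, and plain dicts stand in for the two stores:

    from cryptography.fernet import Fernet

    key_store = {}   # small, separately backed-up store of per-person keys
    data_store = {}  # bulk store; only ever holds ciphertext

    def store(person_id, pii_bytes):
        key = Fernet.generate_key()
        key_store[person_id] = key
        data_store[person_id] = Fernet(key).encrypt(pii_bytes)

    def read(person_id):
        return Fernet(key_store[person_id]).decrypt(data_store[person_id])

    def purge(person_id):
        # Once the key (and its backups) are gone, every copy of the
        # ciphertext, including offsite backups, is unreadable
        del key_store[person_id]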
This is probably part of the explanation.
Doctors, lawyers, and many other professions have such a system; why can't we have it as well?
It's fundamentally different from malpractice, in my opinion. In health care, malpractice has obvious pieces of data: we know who the doctor is, we know their credentials, we know what information they had and when they had it, we know what they decided, what they prescribed, what they said.
Software engineering is a team based endeavor. Who exactly is responsible for unrecognized vulnerabilities? Everyone? No one? One dude who everyone sorta thought handled security stuff? It's as clear as mud.
A security team with people who do it full time. Betting your security on the one dude who sorta did everything should be criminal.
Aka, not this: http://i.imgur.com/a7S95nG.jpg
Here's a quote from Equifax's early release on the breach:
Equifax said that it had hired a cybersecurity firm to conduct a review to determine the scale of the invasion.
So, to your question, I'm going with "no one", at least internally.
It's beyond belief (well, not really anymore); but, not only do they not have security covered internally (criminal in itself), but they don't even appear to have a regularly engaged cybersecurity firm. They had to go out and hire one post facto.
That's clearly negligence.
> What if management hires an incompetent security team?
That's harder to do because you have to establish competence, which has led to a bunch of hazing rituals via whiteboard for general software development and a lot of other insecurities. Being a security professional isn't regulated by law, so you can't check the law to determine if someone's competent. So whose opinion do you trust, and why do you trust their competence? An expert witness, maybe?
"That's clearly negligence."
Great, so you just made it illegal or impossible to create a startup. Congratulations.
All of this is under the assumption that you'd be handling a lot of PII or sensitive information, in which case, yes, I don't want just any startup working with PII without some kind of security team.
Exam specs: https://ncees.org/wp-content/uploads/2015/07/SWE-Apr-2013.pd...
In at least some of the areas where we really care about software quality (e.g., banking, medical devices) there are existing regulators who will fuck your shit up if you don't take certain aspects of quality seriously. Which is good, but I think it's part of why we don't have an industry-wide program.
Maybe we should take a lesson from Hammurabi:
"If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. If it kill the son of the owner the son of that builder shall be put to death." 
The occasional execution would probably make people much more serious about unit testing.
Just look at Toyota's "unintended acceleration" case. If their firmware engineers had had access to static analysis tools (a few grand for a license), the bug would have been pointed out to them immediately. Instead, Toyota hired inexperienced engineers, deprived them of appropriate tooling, and pushed the cars out to the marketplace, where they killed people. The result? Toyota was cleared of any wrongdoing. They're computers. They're too complicated. No one can know how they work.
I would love to see that change. Right now, though, we're in a big wave of "inequality is great", which I think strongly contributes to this problem. Let's hope that wave crashes, letting us start to hold executives and managers accountable.
Sometimes, instead of patching, the software should be decommissioned. Search the news for planes that were grounded when serious flaws were found.
> Second, if you want to be cost-effective you must leverage many existing components of mostly unknown provenance and quality.
There are different components for different kinds of requirements. You wouldn't use components meant for two-story buildings to build a skyscraper.
> Finally the security aspect is extremely difficult because both the cost and risk of mounting an attack are extremely low.
If the risks are high, systems shouldn't be deployed. There's a reason we don't allow people to have machine guns for self defense.
It just needs to be industry-wide.
Software engineers don't need to be computer scientists, in the same way civil engineers don't need to be materials scientists.
There is a bootstrap process, and even in other industries not all engineers are PEs... but all projects are reviewed and stamped by PEs.
This industry resists because it's filled with CS folks who either can't or won't believe that there is anything more to engineering than data structures and algorithms trivia.
Imagine that management tells their lawyers, "We need to do it by tomorrow, figure something out." Most likely, the lawyers will either refuse to do the work or report management to law enforcement.
Rather than compare it to doctors, lawyers, etc., I would compare it to structural and civil engineers. Those are the sorts of regulations we require. If the CEO of a construction company ignores warnings given by one of his structural engineers while building a bridge, that CEO is held responsible for criminal negligence and put in prison for a long time. The same needs to happen for technology company management who cut the development timeline, deprive developers of adequate tools and work environment, and hire inexperienced development staff simply because they're cheap.
Would you like to drive across a bridge if you knew the company operated the way tech companies operate? Viewing their engineers as a cost center to be reduced, as little more than spoiled typists whose technical concerns are always viewed as unimportant in the face of business goals, crammed into office spaces shown by over 1000 studies to damage productivity, and constantly pressured to rush through everything in defiance of the basic biological fact that human beings are not capable of extended periods of mental exertion, especially in the face of constant interruption? Would it make you more or less confident in that bridge if there were court precedent for companies whose practices resulted in people's deaths being let off without punishment? That's the situation we're in.
"Critical systems" developers would need astronomically expensive insurance to even exist, and therefore prohibitively high salaries.
I personally believe there should be some measure of a corporate death penalty to emphasize the responsibility involved though.
Then, companies shouldn't have such a high concentration of risks in one place.
The problem wouldn't be such a disaster if the SSN alone weren't enough to get a loan. For example, if we had a password (stored in hashed form) in addition to the SSN, the problem would be much less severe.
True, but that's why good organizations almost always have technical people on the management team who advocate for the technical arm of the company and ensure that it is appropriately resourced.
If anything I'd think accounting is a better fit, but even then, I can't say I've met many people in the software industry who think very highly of any sort of existing test or certification; why would this one do so much better at measuring actual skill?
While writing secure code using best practices is a big part of the security equation, any company that stops there will be pwned. The most oft-used illustration in securing systems is "the onion". You need to layer on protections, from firewalls (both traditional and application layer), to solid access control to simply making sure that systems are configured properly (just hit up any SSL testing site and pop in just about every non-tech business' main web page - might as well just make the result page static with a big letter "F"). Heck, even technologies like ASLR/DEP are an extra layer.
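In the spirit of the SSL-testing-site suggestion, here's a quick self-check using Python's standard ssl module (example.com is a placeholder, and a real test site checks far more than the protocol version):

    import socket, ssl

    def negotiated_tls_version(host):
        # Connect with default trust settings and report which protocol
        # version the server actually negotiated
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. 'TLSv1.2'

    print(negotiated_tls_version("example.com"))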
The goal is to make an attacker have more than a few hurdles to hop in order to breach and to ensure that if you are breached, the value of what is exfiltrated is either worthless (i.e. properly hashed passwords) or detected and stopped before it's all gone (partial breaches aren't awesome, but it's easier to explain 1% of your data being leaked than it is to explain all of it being leaked).
I've always liked that sarcastic advice of "If you and your friend encounter a bear ... run ... you needn't outrun the bear, you only need to outrun your friend". If you make things difficult enough, your attacker might move on to another target. And hopefully, if they succeed in breaching part of the defenses, someone will discover it before they return and shore up what wasn't "perfect".
Ask people who've gone through that process how rigorous it is...
At $DAY_JOB our security falls into two buckets (1) PCI and (2) stuff that keeps us secure.
IDK if it's possible to have a widely accepted security standard that isn't checking nonsensical and out-of-date boxes.
1) Stuff that keeps you insecure (a.k.a. ISO 27001 ISMS stuff)
2) Stuff that somewhat helps, but is covered by fluff (a.k.a. PCI-DSS)
3) Stuff that actually keeps you secure.
PCI-DSS at least gives you a sledgehammer to convince lazy low-level managers to dump ciphers like RC4 and encrypt some of their data. It's not utopia, and it does err on the side of perpetuating banks' infatuation with 3DES too much, but I saw getting good stuff done by using it as an excuse.
You still need to have a competent security team of course, but it helps them not being ignored. Well, sometimes.
I'd be happier to see licensing and accountability. But it would have to have significant teeth. E.g., companies can't build systems of type X without somebody licensed. If there are problems with the system, then the person with the license faces personal fines and risk of suspension or loss of license. That would be less bad than certification, but it could still substantially slow industry progress if the licensing review board had a conservative tilt to it.
Of course, the real problem with most places is not engineers not knowing. It's with managers who push for things to happen despite what engineers advise. Licensing could sort of fix that, in that it could force engineers to act like professionals and refuse to do negligent work. But it still lets shitty managers off the hook.
So what I'd really like to see is a regulatory apparatus for PII. In the same way that the EPA comes after you for a toxics spill, an agency would come after you for a data spill. They investigate, they report, they impose massive fines when they think it's warranted. And when they do ream a company for negligent management of data, executives at all the peer companies get scared and listen to the engineers for a while.
www.equifaxsecurity2017.com uses an invalid security certificate.
The certificate is not trusted because the issuer certificate is unknown. The server might not be sending the appropriate intermediate certificates. An additional root certificate may need to be imported.
Error code: SEC_ERROR_UNKNOWN_ISSUER
How do you explain to your father/grandfather/whoever that equifaxsecurity2017.com is OK, but equifax-security-breach.com, checkyourequifaxaccount.com, equifaxsecurity-2017.com, and equifaxsecurity2018.com are not legit?
Stick to your top level domain. Something like security2017.equifax.com or equifax.com/security2017 would be okay.
It is in the best interests of the settling company to keep distance between those sites and their primary domain.
[Edit: Ah, the chain is incomplete, see https://www.ssllabs.com/ssltest/analyze.html?d=www.equifaxse...]
We have the technology to build vastly superior replacements right now. It's mostly network effect requirements that make this extremely challenging/slow to implement.
An example of something we could do is cryptographically authenticated web-of-trust creditworthiness estimation, with techniques like proof of burn and selective trust anchoring used to establish terminal nodes in the unrolled trust DAG. This sort of thing would allow for pseudonymous, automated determination of trust without the extreme security and privacy risks posed by centralized identity stores like the credit bureaus.
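For a flavor of what scoring over an unrolled trust DAG could look like, here's a toy Python sketch; the proof-of-burn and anchoring machinery is elided, and the names and damping factor are invented for illustration:

    DAMPING = 0.5  # how much trust decays per endorsement hop (made up)

    def trust_score(endorsements, anchors, node, seen=frozenset()):
        # endorsements maps each node to the set of nodes it vouches for
        if node in anchors:
            return 1.0
        if node in seen:
            return 0.0
        backers = [n for n, outs in endorsements.items() if node in outs]
        if not backers:
            return 0.0
        return DAMPING * max(trust_score(endorsements, anchors, b, seen | {node})
                             for b in backers)

    graph = {"anchor": {"alice"}, "alice": {"bob"}}
    print(trust_score(graph, {"anchor"}, "bob"))  # 0.25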
Which means each and every line of code was written by the lowest bidder.
Leadership sets the priorities and expectations. They get paid disproportionately more than other employees, and I think they should be scrutinized and bear responsibility correspondingly.
But I have no doubt they probably found someone lower in the ranks as a scapegoat.
"Joe was in charge of patches. And we are all equally disturbed and horrified by his behavior. But we've reached out to him and let him go. Now give us more of your personal information so you can get free credit monitoring for 6 months [+]. -Sincerely and with deeper regrets, the Executive Team [++]"
[+] (fine print) then charged at $49.99 a month until cancelled. To cancel, please visit one of the 3 Equifax locations in person on the first Wednesday of the month. Accepting
[++] (even finer print) by accepting the free credit monitoring you agree to binding arbitration and forfeit your rights to participate in a class action suit against Equifax and its subsidiaries.
If he had a CS degree, that wouldn't make him any less responsible for this massive data leak.
If I got the story right, this bug was present for the last 9 years and was patched upstream a couple of days before the leak. Some measures could have prevented its exploitation or reduced its impact (throttling by IP, one-time session keys, and so on) and should be in place for any serious application, but it's entirely possible they had a fixed schedule for patches and mis-evaluated this flaw as non-critical.
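Throttling by IP doesn't have to be fancy; a sliding-window sketch in Python (the window and limit values are made up):

    import time
    from collections import defaultdict

    WINDOW = 60.0  # seconds
    LIMIT = 10     # attempts per window per IP

    _hits = defaultdict(list)

    def allow(ip):
        now = time.monotonic()
        # Keep only the timestamps still inside the window
        _hits[ip] = [t for t in _hits[ip] if now - t < WINDOW]
        if len(_hits[ip]) >= LIMIT:
            return False
        _hits[ip].append(now)
        return True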
A LOT of companies carry obsolete dependencies for a long time.
• It's hard to find technically skilled people who want to spend all day doing management tasks.
• It's easy to find essentially unskilled people who do want to spend all day doing management tasks.
• There is a large set of unwritten rules and social expectations that the people who created and run such companies use as proxies for competence. Do you dress nice, can you play an enjoyable game of golf, are you married, how old are you, etc. These proxies invariably de-select the kinds of people who have a deep understanding of their field (i.e. single young men who are able to devote enormous hours to their craft).
Edit: note the recommendations on her LinkedIn page. Every single one talks about her collaboration and communication skills; not a single mention anywhere of technical skills. It's tempting to take shots at "Susan M" here, but the real issue is a boardroom culture in which management is seen as a skill entirely divorced from the effort being managed.
A meritocracy is not born when rich corporations (buyers of labor) select vendors (sellers of labor) based on personal connections and not ability to do the job.
This is extraordinarily evident in the distribution of engineer salaries.
A "tech" company is a company for whom technology (ie. developers) is a profit center rather than a cost center.
I hope that this story will bring down the hammer on their heads - not just Equifax, all of them.
That sounds to me like I don't own my own data.
Creditor's public key, not private key.
However, while undeniably stupid, hopefully they have rate limiting in place so guessing the PIN would not be feasible even if you know the day the credit freeze was put into place.
pretty hacky implementation but oh well
There's an option to pick your own password during the previous step but having an automated one is the default option, so a lot of people miss it.
Your chance of being wrong in all 1000 guesses, assuming you guess randomly (not ensuring you never make the same guess twice), is (1 - 1/(60*24))^1000.
That's about 0.5.
This is based on the assumption that you are guessing for a different person each time, so you can't increase your odds by eliminating any guesses you've already made. If you're guessing for the same person every time, you just need to guess half the possibilities. You can confirm this by realizing:
Your chance of being wrong in a guess is 1 - 1/(60*24 - x), where x is the number of guesses you have already made.
The product of 1 - 1/(60*24 - x) from x = 0 to x = 60*24/2 - 1 is 0.5.
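A quick numerical check of both figures, assuming 1440 possible minute-granularity PINs:

    n = 60 * 24  # one possible PIN per minute of the day

    # 1000 independent random guesses, a different person each time
    print((1 - 1 / n) ** 1000)  # ~0.499, about a coin flip to miss all 1000

    # Guessing one person's PIN without repeating a guess
    p = 1.0
    for x in range(n // 2):
        p *= 1 - 1 / (n - x)
    print(p)  # 0.5 after trying half the space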
If you develop in-house software, you ARE A SOFTWARE COMPANY, whether you want to be or not.
Amazing how this good old boy network still thinks like it's 1970.
2) Search their twitter history for "I just froze my credit" or similar.
3) Try the PIN corresponding to t-1, t-2... t-5.
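Enumerating the candidates is trivial; a sketch, assuming for illustration that the PIN is the freeze timestamp rendered as MMDDYYhhmm:

    from datetime import datetime, timedelta

    def candidate_pins(freeze_time, minutes_back=5):
        # Yield the assumed timestamp-format PIN for t, t-1, ..., t-5 minutes
        for m in range(minutes_back + 1):
            t = freeze_time - timedelta(minutes=m)
            yield t.strftime("%m%d%y%H%M")

    for pin in candidate_pins(datetime(2017, 9, 8, 14, 30)):
        print(pin)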
If you need more convincing, check out his blog at https://tonywebster.com/.
PS. Direct link to his post about the MN Court of Appeals outcome: https://tonywebster.com/2017/04/minnesota-court-of-appeals-d...
Edit: Can confirm: The pin is the exact date/time stamp that my freeze was applied. I'm able to tell based on another note saved to 1password. It is within 1 minute :(
Thank you for linking to more accounts of this.
It also prevents them from selling your credit information to credit card companies (and, I'm sure, insurance agencies and many other businesses). These businesses want a list of people with good credit history to market their wares to.
Essentially, you're "taking yourself off their sales shelf", so they do not want you doing this, and will make it as hard as they legally can.
Also note that you pretty much cannot get a loan while your credit records at these firms are frozen, but it's actually easy to get them un-frozen and then frozen again once you get the loan.
In fact, you should find out which credit agency your bank (car dealer, whatever) uses and then only unfreeze that one.
> Verified PIN format w/ several people who froze today. And I got my PIN in 2007—same exact format. Equifax has been doing this for A DECADE.
At a later date, I authorize credit card w to access some computed score that I compute and verify via other trusted means (another blockchain or whatever). Credit card w never sees all my data, and the verifier gets access to only what it needs. Nobody needs to trust anyone; everyone trusts the chain instead. Just an idea.
At previous employers, without going into terribly much detail, we had an asset that was treated with the kind of security that something like this should have been treated with. It was on a segregated network that could only be accessed through proxy hosts, requiring two-factor authentication. The proxy hosts were hardened (only the specific, needed, services/components installed/running, audited and firewalled to death). The devices in the secure network could not see the corporate network, let alone the Internet and the corporate network/internet could not see these devices. Even special 'management interfaces' for corporate devices were segregated. This was in addition to all of the rigor put in to securing each endpoint.
Companies need to realize that security is purely a defense-related behavior. You have to be "perfect" 100% of the time, but your attacker need only be right a small number of times. The goal is to increase the number of times an attacker has to be right to get at your data: from ensuring your database accounts can only execute specific things, to hardening and isolating your web servers to limit exposure, to properly configured firewalls (including application-layer firewalls/log analysis). And ensuring that employee access to high-value targets is as minimal as possible and protected thoroughly. There are both "preventative" and "reductive" technologies that need to be put in place. Preventative is designed to stop a breach; reductive is designed to ensure that if breached, the breach is either worthless (i.e. proper password hashing) or caught and interrupted before all of the data is exfiltrated. It's a lot easier to explain to investors (and your fellow countrymen) that a couple of million user accounts were exposed than it is to explain that 124 million of them left.
From the looks of it, it appears Equifax treats security like most large, non-tech businesses -- an expense that should be cut as deeply as possible. It's probably fitting that they have the word "fax" in their name. If I had a guess, they probably have mandatory security auditing requirements, they paid the least they could to meet that regulation, and got the answer they paid for (or found someone to give them the answer). I'll also guess that this PIN issue will turn out not to be the worst of the security practices in place -- I mean, how many weeks did they wait to report this?
 I have a few years' history at a large corporation working in and around security. I've seen the ugly, though I feel that we handled things very well (incredibly well compared against Equifax!)
 i.e. not their customers.
 I'm thinking in terms of a typical SQL server, where one can eliminate table/view level access in favor of stored procedures that limit what they provide and require a level of knowledge of the operation of the system (and can be tracked by logging in a manner that identifies behavior that's not normal).
 And is it just me being overly cynical or does anyone else think that they waited until a historic hurricane would dominate the news cycle before going public with it? It was pretty good timing, really -- coming right off of Harvey and right into Irma, it's easy to miss this story among the other big news (one 'general news/politics' site that I expected to see all kinds of headlines on had it quite low on the fold for a day and nowhere to be found, today). Or maybe they were just waiting to give time for more of their higher-ups to sell stock. /s
Besides, lots of "hacker" hackers are self-taught.