> Equifax had allowed over 300 security
certificates to expire, including 79 certificates for monitoring business critical domains.
(on page 2 of the Executive Summary)
I've been following the Equifax breach story but this is the first I'm hearing about the expired certificates. That is shockingly bad.
I'm a little disappointed in the final "conclusion" of the report, though. The end of the executive summary basically chalks the breach up to two things: "Equifax's IT management structure was complicated" and "Equifax uses legacy software that is hard to secure". These are valid points, but these are also issues that nearly every single major corporation in the world faces, and yet many of them still manage to prevent (or at least mitigate) major breaches. These aren't good enough reasons to explain why Equifax failed so spectacularly compared to every other bureaucratic company with legacy software.
Also, I know this report isn't meant to be a remediation strategy roadmap, but it's also pretty disappointing that the recommendations section is basically just 3 pages of fluffy, vague, "X and Y should work together to increase cybersecurity" bullshit. Such a high profile incident would have been a great time for the federal government to really show some leadership (or at least strong guidance) in this realm, but they really didn't. I mean hell, at least link your recommendations to the NIST Cybersecurity Framework...
I'm pretty tired of companies telling me that it's fine for them to hoover up extremely sensitive information like my social security number and then turning around after a breach and saying, "well, there was nothing we could do."
It can't be both. If it's impossible to secure companies, then maybe Marriott shouldn't be asking for anybody's real name when they sign up for a hotel. Maybe we should stop using credit agencies for identity verification and start investing government resources into a separate 2-factor system. Maybe you should have a legally protected right to lie to businesses that ask for your personal information.
Equifax leaked personal information for 50% of the US population. If you were voting, and there was a 50% chance that your ballot and voting history were going to be leaked publicly after the election, you would expect either:
A) Someone is so incompetent that they're going to jail, or
B) The system we're using is so fundamentally broken that we need to rethink the core paradigms of how it's built.
To me, a report like this sounds like the House is saying that where corporate security is concerned, B is the answer.
It's important to distinguish that it wasn't actually Marriott that had the data breach. It was Starwood Resorts, now a Marriott-owned entity, but at the time of the breach it was not a Marriott property. Marriott is being treated as the guilty party because they now own Starwood, but Marriott's systems were never breached, so Marriott should keep doing what they are doing (presumably) and transition all the Starwood systems over to the more secure Marriott systems (which I believe they already said they are doing).
Got close to pushing them to a centralized database (instead of the per-property Access database), but left before we could finalize that project. Ugh, and the reports I had to design... they wanted smaller than 7 point font on legal paper that would then be faxed. Every property would fax quarterly reports generated by my software for board review.
That being said, between the recent Google+ breaches and the older Target breaches, it increasingly feels like I'm flipping a coin when I trust companies with data.
Based on Marriott's handling of this breach, they seem to be decent at security. But I don't know how as a consumer I could tell that in advance of all of this.
M&A is a culprit in no small number of these cases, so let's be crystal clear: M&A does not absolve anyone of responsibility. Let me know when the underwriting bankers have their bonuses garnished for lack of due diligence, and then you can tell me about how "it wasn't actually Marriott that had the data breach". Let's say it loudly and clearly: no, it was Marriott that had the data breach.
So I don't actually feel a ton of ill will to them, even though I agree that doesn't absolve them of the fact that they bought it, and it is now very much their problem to deal with. It may not be your fault that the puppy that you bought isn't house trained, but I'm still not going to clean your carpet for you.
Having said that, this kind of underscores what I was talking about above. If Marriott themselves couldn't tell in advance that the company they were buying was an insecure liability, how the heck am I supposed to be able to tell?
If it's not feasible for a company like Marriott or Verizon to know in advance of an acquisition which companies are secure and which companies aren't, consumers have no chance. There's no feasible way for a consumer to protect themselves in that world.
Strongly disagree, this is playing with variables.
Marriott2016 + Starwood2016 = Marriott2017.
Marriott, the present day company, absolutely includes the company that had the "control or agency to stop it".
> it would have just meant the breach was someone else's problem
This isn't a wash. Tort is only effective if the party responsible gets punished, so it's very important which party gets punished. If Marriott had discovered the breach in due diligence, the Starwood investors' payout would have taken a big hit.
As it happens, there are two behaviors that need to be disincentivized: Starwood designed faulty systems and pawned off its ramshackle legacy crap to the highest bidder; and Marriott2016 (much like Equifax) glommed together so many legacy systems that the likelihood of a breach intensified (though to Marriott's credit, the attack doesn't seem to have escalated out of the former Starwood systems into the parent systems. I'd still like to see steep fines imposed, but way smaller than Equifax's, proportional to that contained scope).
The penalty on Marriott2017 should be steep enough to encourage future buyers to step up their due diligence enough to put the acquiree's payout at risk, while also rewarding Marriott for catching the leak before escalation.
> It may not be your fault that the puppy that you bought isn't house trained, but I'm still not going to clean your carpet for you.
I like your analogy a lot.
This is a good point that I wasn't considering. It's not like Starwood vanished when it got acquired. It's still there, it just got rolled up into Marriott. So even if I wasn't mad at Marriott2016, most of the people who I am mad at are currently working at Marriott2017.
> Starwood designed faulty systems, and pawned off its ramshackle legacy crap to the highest bidder
Also agreed on penalties, and that's a good way of phrasing the problem. I don't think that Marriott should be let off the hook for having to deal with the breach. And while I've been trying not to criticize their security response, their social response has basically been, "look over there, free credit monitoring," which is clearly insufficient.
Sending some fall guy to jail is shooting the messenger. The message from the stockholders is: we don't care about security or privacy. The message the feds should send back to them is: well you should.
Conversely, no one's going to "rethink the core paradigms" without some money on the line.
Establish standards for the value of stolen data, something like the worst possible case. So for Equifax, which is potentially a gold mine for identity theft, fine them the average stolen-identity cost (~$1,300) for each of the 143 million records. 200 billion dollars (probably even 20 billion) seems like it would be sufficient incentive to properly secure our data.
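As a back-of-the-envelope check, using the figures above (the per-identity cost and record count are the numbers cited in this thread, not official loss estimates):

```python
# Rough fine estimate from the figures cited above; both inputs are
# the commenter's assumptions, not official numbers.
cost_per_identity = 1300           # ~average cost of a stolen identity, USD
records_exposed = 143_000_000      # records in the Equifax breach

fine = cost_per_identity * records_exposed
print(f"${fine / 1e9:.1f} billion")  # $185.9 billion
```

So the "200 billion" figure is roughly the worst-case math, not hyperbole.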
I also suspect we would need some kind of regular auditing, to ensure companies could afford a breach. Something like this would be a substantial drain on startups, as it would add another significant risk factor.
In the long run, what probably needs to happen is some kind of "data insurance", and we just expect that all companies working with our personal data carry it (or possibly even legislate that they carry it, similar to automobiles). It would make things easier for startups, who would pay much cheaper rates while their adoption was low, while also incentivizing them to limit their data collection to only what they needed.
Disclaimer: I work in the insurance industry, focused particularly on property insurance for disaster prone areas. Lately it feels like there are a lot of parallels between how personal data is stored these days, and Florida in the 90s.
Most organizations I've seen have a very complicated risk management formula to determine how much they're willing to spend on things like cybersecurity. I simplified it in my original comment, but from what I've seen in the current state of things, insurance is typically on the side of the equation that justifies spending less on security, not more.
> Warren wants to eliminate the huge financial incentives that entice CEOs to flush cash out to shareholders rather than reinvest in businesses. She wants to curb corporations’ political activities. And for the biggest corporations, she’s proposing a dramatic step that would ensure workers and not just shareholders get a voice on big strategic decisions.
Warren hopes this will spur a return to greater corporate responsibility, and bring back some other aspects of the more egalitarian era of American capitalism post-World War II — more business investment, more meaningful career ladders for workers, more financial stability, and higher pay.
More details here:
On this point, I can see no other explanation except that government and business are colluding against the people whose data is being leaked in order to normalize a lack of privacy. Token fines are handed out on the order of seconds' worth of annual profit, sometimes minutes', but nobody's going to jail, certainly nobody in the boardroom or executive bathroom. Because fuck you, that's why.
Not only this, but we certainly aren't going to be able to sue as individuals over it, but why not? Why does personal data only have a value when someone sells it, why not at rest as well? These leaks contribute to financial criteria that are used to determine interest rates paid on loans, e.g., so a number value is calculable. I'm not holding my breath.
Target should have had to close stores to cover the shortfall from leak penalties, and Equifax and Marriott should have their corporate charters dissolved.
Stern punishments for fraud/lying/cheating on security audits would be great, but punishments for "incompetence" would have the opposite effect.
I believe this happens in transportation sectors often enough to be an important skill in those industries.
To be sure, the subjective measure I'm talking about is laws that don't yet exist, but if you have a problem with law enforcement as a monster of subjectivity I can't speak to that.
> most competent cybersec people are wary of inevitably being fired because of some random breach that is mostly out of their control
I would like to hear of instances where this has ever happened, and at any rate it would be an absolute-minimum penalty in a Target/Equifax/Marriott context.
> punishments for "incompetence" would have the opposite effect.
I don't understand the scare quotes, but like I said (or tried to imply), bad things happening on someone's watch where best practices are established and not followed are criminally punished often enough to mention.
HN's probably going to frequency-limit me after this, so don't think I'm not reading just because pg won't let me post for the rest of the day.
There's a significant difference between cybersec and other sectors: most other sectors don't have groups of extremely well-funded, malicious attackers trying to circumvent everything you do. We send people to jail for failing to do things that are entirely in their control (a train conductor falling asleep, a CFO falsifying records, etc). But in situations where there are malicious outside forces, we don't punish the victims.
For civil engineers, sure they are legally liable for not building something up to established codes or if they knowingly defraud inspectors, etc, but if a terrorist blows up a bridge, we don't send the bridge's engineer to prison (nor do we send the director of the FBI to prison for failing to prevent the bombing). If a burglar is able to sneak past the security guard at the mall, maybe the guard gets fired, but we certainly don't send him or his boss to prison, either.
> I would like to hear of instances where this has ever happened, and at any rate it would be an absolute-minimum penalty in a Target/Equifax/Marriott context.
During my career of cybersec consulting I saw it often enough for it to easily be considered status quo. Security at most companies is basically like playing a game of hot potato: everyone knows it's just a matter of time until the next breach is discovered. This is especially the case in most companies that are actively looking for a CISO: the reason you're probably looking for a new CISO in the first place is because your last one was fired/"retired"/left, or you didn't have one in the first place.
So now a new CISO joins, and they're responsible for reforming what is most likely a steaming pile of shit. Unfortunately, it can take years to move the needle even a little bit in terms of security maturity. So the new CISO might be 9 months into revamping the security program, but hackers just breached the network using an exploit that was left there by the last guy and that the CISO just hasn't gotten anywhere near being able to remediate yet. Or, maybe the hackers used a 0-day against the company that literally nobody even knew about until today. There was nothing the CISO could realistically do, but the C-suite is pissed, stockholders are pissed, customers are pissed, and someone's head needs to roll. So who gets fired (or is forced to "retire")? The CISO, of course. (incidentally, this means a new CISO has to be hired, which is going to significantly delay any security remediation efforts, which just means the company is exposed even longer).
I've personally seen this happen at multiple companies, and I've heard countless other similar stories from my colleagues. At my (very large) consulting firm, almost everyone I worked with that had >10 years experience had been offered a CISO position at one of our clients, but almost all refused it, because like I said, it's pretty much an inevitability that taking that job will just result in them being used as the sacrificial lamb in 12-24 months when they get hacked by something that's hardly even their fault. CISO is, based on what I've seen after years in the industry, not a coveted position whatsoever.
It's also funny that you mentioned the Target hack specifically, as the Target breach wasn't caused by anything I'd consider even remotely close to "incompetence". At the time, Target was actually known (at least among my colleagues) for having one of the best cybersec teams in corporate America. They did everything by the book, and the breach wasn't caused by any unpatched vulnerabilities or misconfigured systems. It was caused by something that nobody at the time even knew to watch out for. It was pretty much as close to "they really did do their best, but unfortunately their best wasn't good enough" as can be. And if you start jailing people for doing their best, or even firing them for it, pretty soon we just won't have anyone working in cybersec at all.
> I don't understand the scare quotes, but like I said (or tried to imply), bad things happening on someone's watch where best practices are established and not followed are criminally punished often enough to mention.
The "scare quotes" are specifically around the word "incompetence" because of what it implies. If you consider every CISO who didn't achieve absolute 100% breach prevention "incompetent", then that would be nearly every single CISO in the country. In the cybersec industry, there's a famous quote from the director of the FBI: "there are two types of companies: those that have been hacked, and those that just don't know yet that they've been hacked". (Again, it's a game of hot potato.) If that's not what you mean by "incompetence", okay, fine, but who does get to decide the definition of incompetence? Who gets to draw the line between "incompetent" and "competent but just unlucky"? The public? A jury? A judge? Cybersecurity is hardly an area that even the most educated judges and juries understand. Nobody with half a head on their shoulders is going to take the risk of jail time just because an 80-year-old technophobe judge couldn't wrap their head around who was truly at fault for a 0-day breach.
You're saying that incompetence is not the reason that breaches are happening. Fine. But breaches are still happening, so maybe Target shouldn't be storing credit cards at all. I have insecure things in my life that I know are insecure. I know that probably at some point in my life someone, somewhere will break into my car, so I don't store tons of money and jewelry inside it.
We know what the solution is to a data store that's fundamentally insecure and can't be fixed -- you limit the amount of sensitive data inside of it so that when it is hacked, the damages are mitigated. We already see this in many large companies with their email retention policies. Many companies take the view, "we don't know when we're going to be embroiled in some random, crazy legal battle. So we delete old emails whenever we're legally allowed to."
I understand where you're coming from with the perspective that incompetence is hard to measure, and to an extent, I agree with you. But the data breaches are just as damaging to consumers regardless of who's at fault. I have very little sympathy for a company that says, "we shouldn't suffer because of a breach because we couldn't reasonably prevent it." Consumers can't prevent breaches.
Companies should face an equivalent risk to consumers, so that they're incentivized to reduce that risk in any way they can. That doesn't need to be at the CISO level; I'd prefer it be at the shareholder level.
If there's a Occam-friendly alternative explanation I'm all ears, but I've been looking for a while and I haven't found anything.
A bunch of different parties failed at a really hard thing, lots of times over many years, and some laws that are tough to write well haven't gotten written yet. Collusion most foul.
We need better laws with stricter fines, and the pressure has to come from the voters and from the subject-matter-experts, because you're absolutely right that it won't come from the corporations anytime soon.
> A bunch of different parties failed at a really hard thing, lots of times over many years
There's a concept in US law called the "attractive nuisance," and the fact that data leaks have happened "lots of times over many years" raises that charge to negligence if not recklessness.
Suit yourself. The only thing scarier than believing it's all a big conspiracy, is admitting that no one anywhere is in control.
"A loose affiliation of millionaires, billionaires, and baby..."
This is interesting. I almost always fudge my birthday a bit when asked for it. If I'm applying for a loan or opening a bank account or dealing with the government, of course I provide the correct information. But it feels like a really bad idea to provide my actual birthday to strangers whether on the Internet or not.
Gov't-issued YubiKeys, or a new gov't-issued "smart ID" with a 2-factor system as part of it?
I can only imagine that this point becomes lawsuits that span decades.
It's absolutely ridiculous that any kind of security team -- not to mention one that's supposedly safeguarding such an immense amount of data -- could let monitoring certificates expire at all. It's almost inconceivable that they could let seventy-nine certificates that are required for security monitoring expire for months on end.
I've worked in security for my whole career, and I'm firmly of the optimistic belief that someday, far fewer security-minded people will be needed to keep organizations secure. This is an excellent counterpoint to my argument, though: even the people who supposedly specialize in keeping things secure can apparently be absolutely clueless.
Even worse, the "seventy-nine" figure was just the certs that were associated with "critical" systems. The total number of expired certs was at least 324.
> At the time of the breach, however, Equifax had allowed at least 324 of its SSL certificates to expire. Seventy-nine of the expired certificates were for devices monitoring highly business critical domains.
(from pg 70 of the report)
We don't really know what the certs were used for (e.g., whether they were needed to decrypt traffic, or were just part of the reporting)... And yes, the team is responsible for keeping them valid, but... this isn't the first time that cert expiration "broke" something and left us worse off than with no SSL at all.
It seems like these systems could only work by either using some heuristic like data volume to decide when something is being "exfiltrated", which isn't nearly as useful, or by whitelisting allowed communications, which seems absurd.
What am I missing?
On that proxy, outbound traffic destination hosts or IPs would be compared to a list of known destinations, and rejected if no match is found.
These methods are not 100%, but they do help add another layer to the security process.
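The allowlist check described above fits in a few lines. A minimal sketch (the hostnames here are hypothetical examples, not any real deployment):

```python
# Minimal sketch of an egress allowlist check at a forward proxy.
# Destinations are hypothetical; a real list would be per-environment.
ALLOWED_DESTINATIONS = {
    "updates.internal.example.com",
    "api.partner.example.com",
}

def egress_allowed(dest_host: str) -> bool:
    """Permit an outbound connection only to a known destination."""
    return dest_host in ALLOWED_DESTINATIONS

print(egress_allowed("api.partner.example.com"))  # True
print(egress_allowed("attacker-c2.example.net"))  # False: blocked, logged
```

As noted, it's not 100%: an attacker who compromises an allowed destination, or tunnels through one, slips past. It's a layer, not a wall.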
Of course, now this means you just need to detect that sort of proxying, which may be easier.
Detecting exfil sucks and is hard.
I totally agree that 100% exfil detection is near impossible. But if you were going to try to get close to it, this is one of the few databases in the world where you really would want to. Their entire model is about collecting a shit ton of information and providing very heavily controlled access to it. Very well-paid people should have spent a lot of time examining how to balance the need to pull in data from many sources while providing only monetized ways to actually retrieve said data. They're like an old-timey pirate captain, who spends most of her time roaming all over the place in the hopes of being able to plunder someone else's treasure, then hiding said treasure in a safe place.
Except they really didn't take the hiding the treasure part very seriously. ;)
I would not consider the ability to proxy traffic much of a new capability for attackers. You can do it trivially with a single SSH command. Even a script kiddy with automated tooling should be able to handle this.
Agree that detecting exfil when you have sensitive data is very important though. Just that it takes a lot of work to protect against, and a ton of work to detect.
Won't be perfectly secure, but it diminishes a major area of risk.
(There are other architectures that will accomplish the same thing, the key is that if a machine can access the user databases, you should be drastically limiting what kind of outgoing connections it can make.)
This doesn't have to mean that they have full, unrestricted access. Access to data may mean that they could get raw records via some internal API. That API may still have logging, access control, DoS protection, etc.
Just like you can detect and prevent an external person from scraping your website in most cases, you can detect some internal service requesting way more than an average hourly number of records. Or with a better audit - specific verifiable entity (signed employee requests, external request id markers, etc.) requesting more than they should.
That assumes you have a well-designed, multi-tier system that assumes no trust. It's not possible if every service connects to the same database with yolo-root-access.
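The "requesting way more than an average hourly number of records" check is simple to sketch. Everything here is illustrative (the baseline numbers and the 10x multiplier are made up, and a real system would use a far more robust model than a mean):

```python
# Sketch: flag an internal service whose hourly record pulls far exceed
# its historical baseline. Numbers and threshold are illustrative only.
from statistics import mean

def is_anomalous(hourly_counts, current_count, multiplier=10):
    """Flag if this hour's record count is `multiplier`x the baseline."""
    baseline = mean(hourly_counts)
    return current_count > multiplier * baseline

history = [120, 95, 140, 110, 130]    # typical records/hour for one service
print(is_anomalous(history, 150))     # False: within normal range
print(is_anomalous(history, 50_000))  # True: looks like a bulk pull
```

The audit variant mentioned above (signed employee requests, external request-id markers) would let you attribute the anomaly to a specific verifiable entity instead of just a service account.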
> Although the ACIS application required access to only three databases within the Equifax environment to perform its business function, the ACIS application was not segmented off from other, unrelated databases. As a result, the attackers used the application credentials to gain access to 48 unrelated databases outside of the ACIS environment.
If you cannot store data securely then you should not profit from that data. Business models that insecurely store consumer data should not be viable.
* Network traffic capture for deferred analysis
* Egress analysis (including decrypting SSL/HTTPS)
The first approach is, IMO, more realistic. Although you can't prevent the exfiltration, you can detect it lazily after it's happened, including where it went, what was sent, etc. Although I'm sure several companies support this, [Eastwind Networks](https://www.eastwindnetworks.com/) does a great job for Cloud and on-prem workloads.
The second option, while more thorough, requires big boxes to run, doesn't scale well, is hard to install on clients (the only way I know to "break" HTTPS is to install a custom CA on all clients and MITM all public traffic, which would break for sites using HSTS and HTTP Public Key Pinning). It's only vaguely feasible, and is fraught with issues.
It's of course much more difficult than that in practice, but that's the general idea.
Sure, lots of companies can manage legacy software. Arguably, though, Equifax has a substantially larger target on its head than most companies. They are the holy grail of personal data. Nothing should be legacy with them.
This is like a loss of primary containment at a nuclear facility and the writeup saying "nuclear plants are hard".
Color me flabbergasted.
But what were you expecting? That the people heavily sponsored by large corps like Equifax would write a nasty report against the people that are basically paying their salaries?
Have you ever seen the 9/11 report? After all the investigations, time, and money spent, the report basically stated that... there was a terrorist attack and buildings fell.
A long time ago I gave up on buying popcorn when high-caliber political committee reports come out. The bottom line is, "this is America and this is business." Equifax is in the business of making billions of dollars, and even if someone forgets to renew a certificate, the caravan goes on.
Their monitoring system might have been using TLS to communicate the events to their aggregation tool, and when the cert expired, you don't really want log data with potentially confidential/critical security information traversing insecure channels, so they may have had it configured to not send any data if the cert wasn't valid.
As for warnings about the cert, it's possible they (stupidly) configured it to not send warnings, or maybe it was sending warnings but nobody was paying attention. I've seen situations before where such warnings were set to go to XYZ person's mailbox, but XYZ person leaves the company and nobody remembered to update the destination address for the alerts.
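For what it's worth, the expiry side of this is cheap to monitor. A minimal sketch (the parsing assumes the OpenSSL-style `notAfter` string format that `ssl.getpeercert()` returns; the 30-day threshold is arbitrary):

```python
# Sketch: days until a certificate expires, given its notAfter field.
# A monitoring job could run this daily and page someone, not a mailbox.
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse an OpenSSL-style notAfter string and return days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

# e.g. fed from ssl.getpeercert()["notAfter"] for each monitored host:
remaining = days_until_expiry("Dec 31 23:59:59 2030 GMT")
if remaining < 30:
    print(f"WARNING: certificate expires in {remaining} days")
```

The failure mode described above (alerts routed to a departed employee's mailbox) is exactly why this kind of check should feed a team-owned alerting system rather than an individual's email.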
I'm disappointed this is recommendation 6, but at least it is in there. I'm also disappointed that they suggest the executive fix this problem instead of legislating a solution. Hopefully they take some action on their own recommendation!
Come up with an alternative plan, give businesses some reasonable number of years to stop relying on them, and after that point they'll no longer be issued to anyone new who's born.
Or if you don't want to get rid of them, treat them like public information. Give businesses X years, and then say, "past this point, we're just going to make these searchable by anyone who makes a request." They're an ID number, not a password.
I hope some of the identity verification blockchain companies succeed for this sort of thing.
What we have now is laughably bad.
If Congress mandates a solution, then it'd be like the VHS stuff all over again. Congress writes a thingy about VHS in the 1980s, and it's completely irrelevant 10 years later. (If a law states that something with VHS is done a certain way, will it apply to DVDs or Blu-rays when they're invented 10 years later? Or to streaming media 20 years later?)
The Executive Branch is the one that actually runs the government. Legislative Branch / Congress sets policies, but shouldn't set solutions. Law goes out of date incredibly quickly.
Ex: If Congress says that RSA Tokens are to be used instead of SSNs, what happens if a better invention (ex: Google Titan) comes out? Furthermore, even if Congress writes a certain policy down (ex: Two Factor Authentication is necessary to protect bank accounts), the Executive Branch is still the ones who enforce the matter.
So in the case of Two Factor Authentication (legal requirement of banks to protect your bank account), the Executive Branch says that "3-personal questions + Password" counts as two-factor security in the USA. And that's why you have so many banks implementing "3-secret questions".
So regardless, the job will come down to the Executive Branch.
Right now, even if they wanted to, the executive can't force a national ID.
That's still too specific. "national ID with a cryptographic token" means that OpenID / Paypal-based single-sign on logins are illegal.
Yeah, writing laws is hard as heck, and Congress is NOT expert in the field of cryptography / login security. So Congress will get it wrong if they even try to write the law in that manner. The Executive Branch CAN hire experts (e.g., 18f, NIST, etc.) to define best practices.
As such, the proper legislative solution would be to mandate "industry best practices of identity protection, as defined by (Insert Agency Here, maybe NIST)".
Why? Why doesn't it just mean that there's a canonical government system that doesn't touch PayPal or OpenID that can be used for stuff that would currently be tied to your social security number?
You have a point on OpenID, but my overall point is that OpenID would probably be sufficient for most financial transactions on today's internet. OpenID is basically equivalent to Paypal's security model.
Neither Paypal nor OpenID require 2-factor or security tokens. I think they're an optional feature. But in any case, the Paypal / OpenID model of identification (Paypal controlled website verifies password, and sends a token to the 3rd party website) would be sufficient for today's security.
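That token-passing model can be sketched with a shared-secret signature. To be clear, everything below (the key, the fields, the encoding) is illustrative; it is not PayPal's or OpenID's actual protocol, which uses proper asymmetric signatures and expiry:

```python
# Sketch of the token model: the identity provider signs an assertion,
# and the third-party site verifies it without ever seeing a password.
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"provider-and-site-shared-secret"  # hypothetical key

def issue_token(user_id: str) -> str:
    """Identity provider: sign a claim about who the user is."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user_id}).encode())
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str) -> bool:
    """Relying site: check the signature, never the password."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")
print(verify_token(token))        # True
print(verify_token("x" + token))  # False: tampered payload fails the check
```

The point is that the relying party only ever handles a signed assertion, so a breach there leaks tokens, not credentials.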
Of course, none of this even touches upon Equifax. Equifax uses SSNs to identify people, often without their knowledge or consent. Equifax is a service to the financial industry, to help keep tabs on individuals' histories.
So the Paypal model will NOT work with Equifax's use case, because the individuals don't always know when they are being tracked. Maybe it'd have to be something like OAuth's model.
In any case, I'm not an expert either. I just appreciate the difficulty of this problem.
There are a lot of things that use public/private keys, however, or security tokens, or whatnot. Should it be a smartphone app? A hardware dongle? Etc. If a hardware dongle, which one?
As such, it's the Executive Branch's job to research the various technologies and implement a new standard to solve the online identity problem.
For example, 18f (White House's crack website team) has the following: https://login.gov/
Github code here:
If single-sign on were widely deployed across US Agencies (and tied to financial services / private sector banks), we'd be in a way better place.
In any case, this is clearly the realm of the Executive Branch. Specifically 18f probably should continue to lead the effort, as they have been.
SSNs are fine and useful. They just shouldn't be the "password" to financial systems. When every damn bank uses "What's your PIN and SSN" to gain access to an account... that's the problem.
The issue is that private companies use SSNs for security. There's nothing wrong in using SSNs as an identifier.
The USA needs to start assuming that SSNs are public information, and to build security through other means. SSNs were never a secret number to be used for authentication / authorization purposes.
There's plenty wrong with this. SSNs were already a pretty bad identifier when they were invented and didn't get better.
There's no check digit, so if you make any mistake at all it's probably a different person's valid SSN.
Until relatively recently they were assigned both chronologically and geographically, so one digit error in SSN plus Address plus date of birth... Causes no red flags, it looks about right. Modern ones are randomly assigned, but it'll be decades before that's most people.
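For contrast, here's a minimal sketch of the Luhn check digit that payment card numbers carry. SSNs have nothing like this, which is why a single-digit typo in an SSN silently becomes someone else's valid number instead of being rejected:

```python
def luhn_checksum_valid(number):
    """Return True if the digit string passes the Luhn check.

    Payment card numbers embed a trailing check digit computed this way;
    SSNs carry no such redundancy, so any typo still looks valid.
    """
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if the result
    # exceeds 9 (equivalent to summing the two digits of the product).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_checksum_valid("79927398713"))  # True  (classic Luhn test number)
print(luhn_checksum_valid("79927398710"))  # False (one-digit error is caught)
```

A scheme with even this much redundancy would catch every single-digit transcription error; SSNs catch none.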
Like most similar systems, the US SSN also very explicitly says it isn't suitable for any purpose except numbering Social Security recipients. People you want numbers for may or may not have SSNs. Don't worry though, since they all look pretty similar and there's no check digit, the people who don't have one long ago learned to give a bogus SSN, probably belonging to somebody else. Somebody you know probably does this today. What a useful "identifier".
> Don't worry though, since they all look pretty similar and there's no check digit the people who don't have one long learned to give a bogus SSN, probably belonging to somebody else.
It's trivial to use US Government sources to do a name-check on the SSN, to ensure that you've got the right person.
The latter costs money, but not much money in terms of doing business.
The fact remains: People use SSNs because the Social Security Administration is very good about keeping SSNs in order. Perhaps the number could be designed a bit better, but the system has been in use since the 1930s and there's no better number to use in the USA in most cases.
> Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging. Equifax recognized the inherent security risks of operating legacy IT systems because Equifax had begun a legacy infrastructure modernization effort. This effort, however, came too late to prevent the breach.
As someone who works in Tech M&A, I often tell clients "hackers go after the weakest link and you just acquired a new link". They nearly universally ignore this advice and skip hardening even the smallest of acquisitions, because, well, "growth". Someday people will learn.
Not convinced of this at all. There just isn't sufficient financial incentive.
The recommendation is essentially "Try to convince the public and private sector to use them less." But I'd argue it is well past time that SSNs be replaced by something fit for purpose. SSNs were never designed to be a unique form of ID, and using things like the cardboard card as further verification is almost comical.
I'd like to see an aggressive alternative that uses the best of our security knowledge and then have it vetted by everyone in the security industry with a pulse. We've seen other countries try this. But most of those countries outsource it to the lowest government bidder, who hide the inner workings behind proprietary claims, and never vet the resulting proposal.
Instead we need something more akin to the United States Digital Service, a publicly created proposal (fully released specs) that is vetted by every academic and security expert they can find.
The hardest part will be saying "no" to requirements creep. Allow certain government agencies to continue to use SSNs for now, and have the new ID "flip" into an SSN behind the scenes. Better than needing five hundred different departments to adopt the new standard before it can go live.
It's gov, so they can do more. It can also be "By 202x using SSN for identification in non-SS purposes becomes illegal."
A company like Bear Stearns got "killed", Enron and others got litigated out. But it looks like Equifax did not face any consequences. It's high time we treat data as an asset class and regulate accordingly. Personal information in particular is acquired by every company and is treated as a valuable commodity. Companies get acquired purely for the amount of data they have. The market has already declared it an asset, so why is it not regulated?
Enron was convicted of massive, deliberate accounting fraud. The company wasn't viable without the fraud.
Not even remotely similar to the Equifax breach.
The security patch here is not a technical solution. It's to wipe out the stockholders of any company this negligent, and repossess the spare homes of the entire C-suite. That should cover most attack vectors pretty reliably.
If anything, the report says the opposite of that.
> It's to wipe out the stockholders of any company this negligent, and repossess the spare homes of the entire C-suite.
This is why we don't get any real reform...citizens have these insane ideas on how to "fix" the problem. How can the government take you seriously?
My comment was not aimed at agreeing with the majority report. Would you dispute my characterization of those expired certs as "negligence"? Or perhaps you don't consider the budget meetings that declared how low cybersecurity fell on the list of priorities as "fairly deliberate"?
> insane ideas
Hmm, maybe you're right. Monetary penalties for undesirable behavior is insane. Let's repeal all tort laws and cancel criminal fines, and make no attempt to levy proportional damages for anything. Let the C-suite keep their options!
> How can the government take you seriously?
If all else fails, a yellow vest might help. I'm hoping it doesn't come to that here.
I would. Does the report indicate that the cert was expired due to budget issues? Does it show that cybersecurity was low priority? Again, the report shows the opposite: they had solid policies and procedures in place, but the failures seem to be in execution and training.
> Monetary penalties for undesirable behavior is insane.
No, that's just fine. And Equifax took a huge stock hit, had to make a potentially cash generating service free for everyone, has to earn back its reputation, etc.
> Let's repeal all tort laws and cancel criminal fines, and make no attempt to levy proportional damages for anything. Let the C-suite keep their options!
Show the criminal activity that resulted in the breach (by Equifax, not the attackers). Show the damages that have resulted from the breach.
> If all else fails, a yellow vest might help. I'm hoping it doesn't come to that here.
If it does, it won't be over the Equifax breach (or even the other 4 larger breaches: https://abcnews.go.com/Technology/marriotts-data-breach-larg...).
> Does it show...
> the report shows...
I'm not asking you whether the House Republicans would call the expired certs negligent, or the security underprioritized. I'm asking you, as the reader of a technical forum. Do you believe everything the government tells you? Or here, what one party tells you? At the very least, read the minority report linked elsewhere in this thread, and interpolate.
> huge stock hit
Nowhere near huge enough, and it rebounded once it was clear there were basically no consequences.
> has to earn back its reputation
Why? Equifax could care less what its unwilling inventory thinks of it.
> Show the criminal activity...
Name one victim whose damages are large enough to merit hiring a lawyer to comb all the contracts to identify the line where a bank promised that Equifax promised that the victim's data would be safe, and I'll show you a breach of contract. Hint: it's a different line for each victim.
> Show the damages...
Precisely my point: the damages to the public are "massively distributed" and infeasible to pin down, which is why there should be a criminal penalty, like we do for say, ozone-depleting chemicals.
> it won't be over the Equifax breach
(sigh) you're probably right.
No, that's not negligent. Nor was security "underprioritized". If you read the reporting on the issue (whether the government report or just reputable news), it shows solid processes and procedures, but there were implementation flaws. There is no amount of security that will ever be perfect.
> Do you believe everything the government tells you? Or here, what one party tells you? At the very least, read the minority report linked elsewhere in this thread, and interpolate.
Considering that the report matches all the other reputable reporting on the issue? Yes, I believe it. Equifax is claiming "factual errors", but I'd have to see their response before commenting on that.
> Nowhere near huge enough
That's your opinion, but the market has decided otherwise.
> Equifax could care less what its unwilling inventory thinks of it.
Correct. I was talking about their customers...the businesses that supply and use their data and services.
> Name one victim whose damages are large enough to merit hiring a lawyer to comb all the contracts to identify the line where a bank promised that Equifax promised that the victim's data would be safe, and I'll show you a breach of contract. Hint: it's a different line for each victim.
Once you do that, prove that the information came from the Equifax breach, and not one of the dozens of other breaches, let alone the 4 other large breaches I cited.
> the damages to the public are "massively distributed" and infeasible to pin down, which is why there should be a criminal penalty, like we do for say, ozone-depleting chemicals.
Or they're negligible and impossible to detect at any meaningful level.
> (sigh) you're probably right.
It's troubling that you seem to think violence is some sort of solution to any problem, let alone this problem.
Like do you understand what an externality is? When I say it's not huge enough, it's pretty clear that I don't mean Equifax is overvalued & people should all go out and short it. The lawmakers and regulators have decided that the externality will have no significant consequences. The market hardly decides anything here; it reacts.
My reading of the reports is that this company was roughly in line with common practices. My point is that common practices must change.
No, they just haven't finished yet. There's already been some new regulation, and I believe more to come, for the credit industry as a whole.
> The market hardly decides anything here; it reacts.
If Equifax's customers decided that they couldn't safely do business with it, then Equifax would cease to exist.
> My reading of the reports is that this company was roughly in line with common practices. My point is that common practices must change.
I certainly agree there, wholeheartedly. But you seem to be singling out Equifax, when it's an industry wide problem.
Enron went out of business due to losing an enormous amount of money.
Neither of these things had much to do with government regulation.
Keeping that data secret isn't the only job it has. In fact, it isn't even really their primary job. Their primary job is evaluating the creditworthiness of potential customers for Equifax clients. As far as I know, they are still doing that job quite well.
Again, Bear Stearns was insolvent due to being overextended, and Enron was propped up by massive accounting fraud (like WorldCom). Equifax, on the other hand, was the victim of a cyberattack.
What tends to matter is whether your name is toxic to your customers, not whether your name is toxic to your unwilling inventory.
It is easy to fall into the trap of seeing the most minuscule of vulnerabilities and dismissing it as "no one could ever possibly utilize that as a vector, it's not critical."
But that minuscule vulnerability becomes a single link in a ladder to everything in the system. Every seemingly small vulnerability matters, as this painfully shows.
Referenced here: https://blog.hellobloom.io/how-hard-was-the-equifax-hack-a3b...
Another report from the committee's minority is also available.
https://democrats-oversight.house.gov/sites/democrats.oversi... Minority Report - FINAL 12-10-2018.pdf
Key recommendations from the minority report:
"Based on the investigation conducted by the Committees, four key legislative reforms proposed by Democrats would help prevent future cyberattacks:
[A] hold federal financial regulatory agencies accountable for their consumer protection oversight responsibilities;
[B] require federal contractors to comply with established cybersecurity standards and guidance from the National Institute of Standards and Technology (NIST);
[C] establish high standards for how data breach victims should be notified;
[D] and strengthen the ability of the Federal Trade Commission (FTC) to levy civil penalties for private sector violations of consumer data security requirements."
On [B], they note that "Equifax was a federal contractor at the time of its data breach".
On [D], they note that "In the three years before the Equifax data breach, the company spent only about 3% of its operating revenue on cybersecurity—less than the company spent on stock dividends...Civil penalties would incentivize private sector companies to prioritize and invest in continually upgrading and deploying modernized IT solutions and applying cybersecurity best ..."
Speaking of which... why is it only ~50% of the adult population in the U.S.?
If the intruders were going around the Equifax network at will (which from the report it appears they were), we should assume 100% of the data was breached.
That was a different portal and a different breach.
From my understanding of FedRAMP, all of the things that Equifax failed to do should be already covered: software patching, isolation of data, audit trails, etc. Seems more like a massive auditing fail.
One difference is that previously Equifax claimed a single employee failed to scan and patch a system. I don't see a reference to that in the report. All I see now is that someone scanned a system improperly:
> The scan did not identify any components utilizing an affected version of Apache Struts. Interim CSO Russ Ayres stated the scan missed identifying the vulnerability because the scan was run on the root directory, not the subdirectory where the Apache Struts was listed.
Now pardon me while I go route my patch management procedures through the nearest baffling and inane dependency.
> A senior Equifax official was terminated for failing to forward an email – an action he was not directed to do – the day before former CEO Richard Smith testified in front of Congress. This type of public relations-motivated maneuver seems gratuitous against the backdrop of all ...
1970s? Am I reading that right? HTML wasn't even developed yet.
It was probably developed very quickly, possibly outsourced, and just stuck in front of the older system with minimal re-engineering.
Many years ago, I worked on a system that put an X Windows front end in front of a mainframe app that used a 3270 emulator to interact with parts of the legacy app. I imagine this is somewhat similar.
For the non-physical world, I have some ideas:
- The entire IT infrastructure can be rebuilt in an automated fashion, and is rebuilt in a prod-parallel equivalent at least weekly
- Any change to "vital" files on any server is audited
I feel for them (not!). BUT then they shouldn't store any valuable data. They should be uninsurable.