House Oversight Committee Report on Equifax Breach [pdf] (house.gov)
238 points by chair6 | 139 comments



> The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate.

> Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains.

(on page 2 of the Executive Summary)

I've been following the Equifax breach story but this is the first I'm hearing about the expired certificates. That is shockingly bad.
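
For what it's worth, catching soon-to-expire certs is cheap to automate. A minimal sketch in Python (the host list is a placeholder; a real setup would read an inventory and page someone instead of printing):

    # Minimal sketch: warn when a host's TLS certificate is near expiry.
    import socket
    import ssl
    from datetime import datetime

    def cert_expiry(host: str, port: int = 443) -> datetime:
        """Return the notAfter expiry time of a host's TLS certificate."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
        return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

    for host in ["example.com"]:  # placeholder inventory of monitored hosts
        # Note: an already-expired cert fails the handshake here, which is
        # itself a loud signal worth alerting on.
        days_left = (cert_expiry(host) - datetime.utcnow()).days
        if days_left < 30:
            print(f"WARNING: cert for {host} expires in {days_left} days")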

I'm a little disappointed in the final "conclusion" of the report, though. The end of the executive summary basically chalks the breach up to two things: "Equifax's IT management structure was complicated" and "Equifax uses legacy software that is hard to secure". These are valid points, but these are also issues that nearly every single major corporation in the world faces, and yet many of them still manage to prevent (or at least mitigate) major breaches. These aren't good enough reasons to explain why Equifax failed so spectacularly compared to every other bureaucratic company with legacy software.

Also, I know this report isn't meant to be a remediation strategy roadmap, but it's also pretty disappointing that the recommendations section is basically just 3 pages of fluffy, vague, "X and Y should work together to increase cybersecurity" bullshit. Such a high profile incident would have been a great time for the federal government to really show some leadership (or at least strong guidance) in this realm, but they really didn't. I mean hell, at least link your recommendations to the NIST Cybersecurity Framework...


If the House's conclusion is that software of this complexity is impossible to secure, then it seems reasonable that we should treat any data stored in it as insecure. Maybe companies as complex as Equifax shouldn't have access to the data they have access to.

I'm pretty tired of companies telling me that it's fine for them to hoover up extremely sensitive information like my social security number and then turning around after a breach and saying, "well, there was nothing we could do."

It can't be both. If it's impossible to secure companies, then maybe Marriott shouldn't be asking for anybody's real name when they sign up for a hotel. Maybe we should stop using credit agencies for identity verification and start investing government resources into a separate 2-factor system. Maybe you should have a legally protected right to lie to businesses that ask for your personal information.

Equifax leaked personal information for 50% of the US population. If you were voting, and there was a 50% chance that your ballot and voting history was going to be leaked publicly after the election, you would expect either:

A) Someone is so incompetent that they're going to jail, or

B) The system we're using is so fundamentally broken that we need to rethink the core paradigms of how it's built.

To me, a report like this sounds like the House is saying that where corporate security is concerned, B is the answer.


>>> maybe Marriott shouldn't be asking for anybody's real name when they sign up for a hotel

It's important to distinguish that it wasn't actually Marriott that had the data breach. It was Starwood Resorts, now a Marriott-owned entity, but at the time of the breach it was not a Marriott property. Marriott is being treated as the guilty party because they now own Starwood, but Marriott's systems were never breached, so Marriott should keep doing what they are doing (presumably) and transition all the Starwood systems over to the more secure Marriott systems (which I believe they already said they are doing).


Oh, interesting! I was a Sr. Software Consultant for Starwood back in the day. They had an interesting, grandfathered tax scheme that allowed for insane growth. Anyway, I wrote the Sales Management and Revenue Tracking System (their name, I pushed for Revenue Tracking For Management) that helped them to prioritize and track rates across all their properties in the mid-90s.

Got close to pushing them to a centralized database (instead of the per-property Access database), but left before we could finalize that project. Ugh, and the reports I had to design... they wanted smaller than 7 point font on legal paper that would then be faxed. Every property would fax quarterly reports generated by my software for board review.

Interesting times.


That's a really good point; Marriott just had an unlucky purchase.

That being said, between the recent Google+ breaches and the older Target breaches, it increasingly feels like I'm flipping a coin when I trust companies with data.

Based on Marriott's handling of this breach, they seem to be decent at security. But I don't know how as a consumer I could tell that in advance of all of this.


> Marriott just had an unlucky purchase

M&A is a culprit in no small number of these cases, so let's be crystal clear: M&A does not absolve anyone of responsibility. Let me know when the underwriting bankers have their bonuses garnished for lack of due diligence, and then you can tell me about how "it wasn't actually Marriott that had the data breach". Let's say it loudly and clearly: no, it was Marriott that had the data breach.


Eh. The breach happened before Marriott had any control or agency to stop it. Their due diligence in buying the company wouldn't have protected any consumers; it would have just meant the breach was someone else's problem. My understanding (of course, correct me if I'm wrong) is actually that their purchase is the reason the breach was disclosed -- Marriott buying the company and doing its own internal audit on their systems is why we know about it now.

So I don't actually feel a ton of ill will to them, even though I agree that doesn't absolve them of the fact that they bought it, and it is now very much their problem to deal with. It may not be your fault that the puppy that you bought isn't house trained, but I'm still not going to clean your carpet for you.

Having said that, this kind of underscores what I was talking about above. If Marriott themselves couldn't tell in advance that the company they were buying was an insecure liability, how the heck am I supposed to be able to tell?

If it's not feasible for a company like Marriott or Verizon to know in advance of an acquisition which companies are secure and which companies aren't, consumers have no chance. There's no feasible way for a consumer to protect themselves in that world.


> The breach happened before Marriott had any control or agency to stop it.

Strongly disagree, this is playing with variables.

Marriott2016 + Starwood2016 = Marriott2017.

Marriott, the present day company, absolutely includes the company that had the "control or agency to stop it".

> it would have just meant the breach was someone else's problem

This isn't a wash. Tort is only effective if the party responsible gets punished, so it's very important which party gets punished. If Marriott had discovered the breach in due diligence, the Starwood investors' payout would have taken a big hit.

As it happens, there are two behaviors that need to be disincentivized: Starwood designed faulty systems, and pawned off its ramshackle legacy crap to the highest bidder; and Marriott2016 (much like Equifax) glommed together so many legacy systems that the likelihood of breach intensified (though to Marriott's credit, the attack doesn't seem to have escalated out of the former Starwood into the parent systems. I'd still like to see steep fines imposed, but way smaller than on Equifax, proportional to that contained scope).

The penalty on Marriott2017 should be steep enough to encourage future buyers to step up their due diligence enough to put the acquiree's payout at risk, while also rewarding Marriott for catching the leak before escalation.

> It may not be your fault that the puppy that you bought isn't house trained, but I'm still not going to clean your carpet for you.

I like your analogy a lot.


> Marriott2016 + Starwood2016 = Marriott2017.

This is a good point that I wasn't considering. It's not like Starwood vanished when it got acquired. It's still there, it just got rolled up into Marriott. So even if I wasn't mad at Marriott2016, most of the people who I am mad at are currently working at Marriott2017.

> Starwood designed faulty systems, and pawned off its ramshackle legacy crap to the highest bidder

Also agreed on penalties, and that's a good way of phrasing the problem. I don't think that Marriott should be let off the hook for having to deal with the breach. And while I've been trying not to criticize their security response, their social response has basically been, "look over there, free credit monitoring," which is clearly insufficient.


Maybe this is going to become a thing for future buyouts. Not only do the accountants have to dig into the financing and books, but the IT forensics teams will need to look for any unknown hack liabilities. If a hack is found, then the sale doesn't have to go through. Of course, the hack should have to be disclosed.


Won't happen if the incentives aren't there.


Wouldn't the incentive be to not have to take on the liability for it? Then again, if all a major breach results in is a slap on the wrist, then I guess there's no real liability. Maybe you're right.


Exactly. I think we're in agreement.


You are 100% right, and it's not just the data we provide to companies we buy products from. I got snail mail last week from an employer I haven't worked for in almost 3 years that said an administrator accidentally sent a mass email to a few thousand individuals with my bank account information, along with the bank account information of everyone that was on that email chain (and, I'm assuming, a few other prior employees). WTF.


Is that not just a matter of Marriott failing to do their due diligence? How does buying an incompetent company absolve you from responsibility?


It doesn't absolve you from responsibility for the harms done as a result of an action and for the financial liabilities that follow from that. But if we as engineers are evaluating the competence of an organization, it does mean that "Marriott was incompetent at security" is false.


C) Wipe out the stockholders

Sending some fall guy to jail is shooting the messenger. The message from the stockholders is: we don't care about security or privacy. The message the feds should send back to them is: well you should.

Conversely, no one's going to "rethink the core paradigms" without some money on the line.


I think this is probably where this needs to go (prison time is interesting, but if we can't get it for financiers who profit off of crashing the economy, we'll never get it for ostensible negligence).

Establish standards for the value of stolen data. Something like the worst possible case. So for Equifax, which is potentially a gold mine for identity theft, fine them the average stolen identity cost (~$1300) for each of the 143 million records. 200 billion dollars (probably even 20 billion) seems like it would be sufficient incentive to properly secure our data.
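
Back-of-the-envelope, those figures come to about $186 billion, which the $200 billion above rounds up:

    # Rough arithmetic for the proposed fine, using the figures above.
    records = 143_000_000      # records exposed in the breach
    cost_per_identity = 1_300  # approximate average cost of a stolen identity (USD)
    fine = records * cost_per_identity
    print(f"${fine / 1e9:.0f} billion")  # -> $186 billion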

I also suspect we would need some kind of regular auditing, to ensure companies could afford a breach. Something like this would be a substantial drain on startups, as it would add another significant risk factor.

In the long run, what probably needs to happen is some kind of "data insurance", and we just expect that all companies working with our personal data carry it (or possibly even legislate that they carry it, similar to automobiles). It would make things easier for startups, who would pay much cheaper rates while their adoption was low, while also incentivizing them to limit their data collection to only what they needed.

Disclaimer: I work in the insurance industry, focused particularly on property insurance for disaster prone areas. Lately it feels like there are a lot of parallels between how personal data is stored these days, and Florida in the 90s.


Doing cybersec consulting, the majority of the clients I worked with already had data breach insurance. The funny thing is that such insurance actually just makes it easier for these companies to cheap out on actual security. What's better for the bottom line, paying $50 million for a strong security program (that will probably get hacked anyway), or pay $20 million for a good insurance policy that will just cover all our damages when we get hacked? (the actual path chosen is usually somewhere in the middle, but I hope you see my point)


If the insurance company had to pay out $20B, the plans would either cost a _lot_ or they would have tons of actually legitimate requirements with inspections (like you see with fire insurance of super expensive buildings: you don't get to just buy cheap fire insurance without having tons of procedures in place for mitigating fire).


The data breach insurance I'm aware of does have such requirements, but cybersec standards change so rapidly, and it costs so much to do any decent inspection, that in practice it just doesn't work as well. It's not like a building where fire codes are more or less the same from year to year, and where a fire inspection can be done in less than a week (if not a single afternoon).

Most organizations I've seen have a very complicated risk management formula to determine how much they're willing to spend on things like cybersecurity. I simplified it in my original comment, but from what I've seen in the current state of things, insurance is typically on the side of the equation that justifies spending less on security, not more.


Equifax's market cap is $11B, so yeah, I'd say a $20B fine would get them to stop mishandling user data, as they'd go bankrupt.


That sends a very loud message to shareholders of other companies: make sure that management is securing whatever data the company has or risk having your entire investment be wiped out.


It's not particularly about security and privacy, but this would also be a step in that direction:

> Warren wants to eliminate the huge financial incentives that entice CEOs to flush cash out to shareholders rather than reinvest in businesses. She wants to curb corporations’ political activities. And for the biggest corporations, she’s proposing a dramatic step that would ensure workers and not just shareholders get a voice on big strategic decisions.

> Warren hopes this will spur a return to greater corporate responsibility, and bring back some other aspects of the more egalitarian era of American capitalism post-World War II — more business investment, more meaningful career ladders for workers, more financial stability, and higher pay.

https://www.vox.com/2018/8/15/17683022/elizabeth-warren-acco...

More details here:

https://www.theguardian.com/commentisfree/2018/aug/18/capita...


> Someone is so incompetent that they're going to jail, or

On this point, I can see no other explanation except that government and business are colluding against the people whose data is being leaked in order to normalize a lack of privacy. Token fines are handed out on the order of seconds worth of annual profit, sometimes minutes, but nobody's going to jail, certainly nobody in the boardroom or executive bathroom. Because fuck you that's why.

Not only this, but we certainly aren't going to be able to sue as individuals over it. Why not? Why does personal data only have a value when someone sells it, and not at rest as well? These leaks contribute to financial criteria that are used to determine interest rates paid on loans, for example, so a dollar value is calculable. I'm not holding my breath.

Target should have had to close stores to cover the shortfall from leak penalties, and Equifax and Marriott should have their corporate charters dissolved.


Sending someone to jail just because they're deemed "incompetent" by some subjective measure is a good way to make sure that nobody ever takes that job position. Chief Info Sec Officer positions are already hard enough to fill because most competent cybersec people are wary of inevitably being fired because of some random breach that is mostly out of their control. If you add even just the threat of jailtime onto that, hardly anyone is even going to dare taking on the CISO role. And that certainly doesn't do anything to make our data more secure.

Stern punishments for fraud/lying/cheating on security audits would be great, but punishments for "incompetence" would have the opposite effect.


> Sending someone to jail just because they're deemed "incompetent" by some subjective measure is a good way to make sure that nobody ever takes that job position

I believe this happens in transportation sectors often enough to be an important skill in those industries.

To be sure, the subjective measure I'm talking about is laws that don't yet exist, but if you have a problem with law enforcement as a monster of subjectivity I can't speak to that.

> most competent cybersec people are wary of inevitably being fired because of some random breach that is mostly out of their control

I would like to hear of instances where this has ever happened, and at any rate it would be an absolute-minimum penalty in a Target/Equifax/Marriott context.

> punishments for "incompetence" would have the opposite effect.

I don't understand the scare quotes, but like I said (or tried to imply), bad things happening on someone's watch where best practices are established and not followed are criminally punished often enough to mention.

HN's probably going to frequency-limit me after this, so don't think I'm not reading just because pg won't let me post for the rest of the day.


> I believe this happens in transportation sectors often enough to be an important skill in those industries.

There's a significant difference between cybersec and other sectors: most other sectors don't have groups of extremely well-funded, malicious attackers trying to circumvent everything you do. We send people to jail for failing to do things that are entirely in their control (a train conductor falling asleep, a CFO falsifying records, etc). But in situations where there are malicious outside forces, we don't punish the victims.

For civil engineers, sure they are legally liable for not building something up to established codes or if they knowingly defraud inspectors, etc, but if a terrorist blows up a bridge, we don't send the bridge's engineer to prison (nor do we send the director of the FBI to prison for failing to prevent the bombing). If a burglar is able to sneak past the security guard at the mall, maybe the guard gets fired, but we certainly don't send him or his boss to prison, either.

> I would like to hear of instances where this has ever happened, and at any rate it would be an absolute-minimum penalty in a Target/Equifax/Marriott context.

During my career of cybersec consulting I saw it often enough for it to easily be considered status quo. Security at most companies is basically like playing a game of hot potato: everyone knows it's just a matter of time until the next breach is discovered. This is especially the case in most companies that are actively looking for a CISO: the reason you're probably looking for a new CISO in the first place is because your last one was fired/"retired"/left, or you didn't have one in the first place.

So now a new CISO joins, and they're responsible for reforming what is most likely a steaming pile of shit. Unfortunately, it can take years to move the needle even a little bit in terms of security maturity. So the new CISO might be 9 months into revamping the security program, but hackers just breached the network using an exploit that was left there by the last guy and that the CISO just hasn't gotten anywhere near being able to remediate yet. Or, maybe the hackers used a 0-day against the company that literally nobody even knew about until today. There was nothing the CISO could realistically do, but the C-suite is pissed, stockholders are pissed, customers are pissed, and someone's head needs to roll. So who gets fired (or is forced to "retire")? The CISO, of course. (incidentally, this means a new CISO has to be hired, which is going to significantly delay any security remediation efforts, which just means the company is exposed even longer).

I've personally seen this happen at multiple companies, and I've heard countless other similar stories from my colleagues. At my (very large) consulting firm, almost everyone I worked with that had >10 years experience had been offered a CISO position at one of our clients, but almost all refused it, because like I said, it's pretty much an inevitability that taking that job will just result in them being used as the sacrificial lamb in 12-24 months when they get hacked by something that's hardly even their fault. CISO is, based on what I've seen after years in the industry, not a coveted position whatsoever.

It's also funny that you mentioned the Target hack specifically, as the Target breach wasn't caused by anything I'd consider even remotely close to "incompetence". At the time, Target was actually known (at least among my colleagues) for having one of the best cybersec teams in corporate America. They did everything by the book, and the breach wasn't caused by any unpatched vulnerabilities or misconfigured systems. It was caused by something that nobody at the time even knew to watch out for. It was pretty much as close to "they really did do their best, but unfortunately their best wasn't good enough" as can be. And if you start jailing people for doing their best, or even firing them for it, pretty soon we just won't have anyone working in cybersec at all.

> I don't understand the scare quotes, but like I said (or tried to imply), bad things happening on someone's watch where best practices are established and not followed are criminally punished often enough to mention.

The "scare quotes" are specifically around the world "incompetence" because of what it implies. If you consider every CISO who didn't achieve absolute 100% breach prevention as "incompetent", then that would be nearly every single CISO in the country. In the cybersec industry, there's a famous quote by the director of the FBI: "there are two types of companies: those that have been hacked, and those that just don't know yet that they've been hacked". (again, it's a game of hot potato). If that's not what you mean by "incompetence", okay, fine, but who does get to decide the definition of incompetence? Who gets to draw the line between "incompetent" and "competent but just unlucky"? The public? A jury? A judge? Cybersecurity is hardly an area that even the most educated judges and juries understand. Nobody with half a head on their shoulders is going to take the risk of jail time just because an 80 year old technophobe judge couldn't wrap their head around who was truly at fault for a 0-day breach.


If companies know that data leaks are inevitable, then we need to shift our security models to be, "avoid storing anything on a corporate server that we don't need to store."

You're saying that incompetence is not the reason that breaches are happening. Fine. But breaches are still happening, so maybe Target shouldn't be storing credit cards at all. I have insecure things in my life that I know are insecure. I know that probably at some point in my life someone, somewhere will break into my car, so I don't store tons of money and jewelry inside it.

We know what the solution is to a data store that's fundamentally insecure and can't be fixed -- you limit the amount of sensitive data inside of it so that when it is hacked, the damages are mitigated. We already see this in many large companies with their email retention policies. Many companies take the view, "we don't know when we're going to be embroiled in some random, crazy legal battle. So we delete old emails whenever we're legally allowed to."

I understand where you're coming from with the perspective that incompetence is hard to measure, and to an extent, I agree with you. But the data breaches are just as damaging to consumers regardless of who's at fault. I have very little sympathy for a company that says, "we shouldn't suffer because of a breach because we couldn't reasonably prevent it." Consumers can't prevent breaches.

Companies should face an equivalent risk to consumers, so that they're incentivized to reduce that risk in any way they can. That doesn't need to be at the CISO level; I'd prefer it be at the shareholder level.



I don't even think it's that, it's just been evolution through convenience. Corporate lobbyists say "hey don't create any pain around data leaks k?" and legislative aides say "that sounds cool, can someone pass the shrimp?"

If there's an Occam-friendly alternative explanation I'm all ears, but I've been looking for a while and I haven't found anything.


Um, what you're describing sounds exactly like regulatory capture.


AIUI, it's the inverse: government assenting to industry requests. Perhaps a distinction without a difference, but to me regulatory capture necessarily involves the revolving door.


> no other explanation except that government and business are colluding against the people whose data is being leaked in order to normalize a lack of privacy

A bunch of different parties failed at a really hard thing, lots of times over many years, and some laws that are tough to write well haven't gotten written yet. Collusion most foul.

We need better laws with stricter fines, and the pressure has to come from the voters and from the subject-matter-experts, because you're absolutely right that it won't come from the corporations anytime soon.


To be sure, I believe there is a global business coup against the people going on these days, and they have enjoyed much success.

> A bunch of different parties failed at a really hard thing, lots of times over many years

There's a concept in US law called the "attractive nuisance," and the fact that data leaks have happened "lots of times over many years" raises that charge to negligence if not recklessness.

https://en.wikipedia.org/wiki/Attractive_nuisance_doctrine


> I believe...

Suit yourself. The only thing scarier than believing it's all a big conspiracy, is admitting that no one anywhere is in control.

"A loose affiliation of millionaires, billionaires, and baby..."


> Maybe you should have a legally protected right to lie to businesses that ask for your personal information.

This is interesting. I almost always fudge my birthday a bit when asked for it. If I'm applying for a loan or opening a bank account or dealing with the government, of course I provide the correct information. But it feels like a really bad idea to provide my actual birthday to strangers whether on the Internet or not.


As with all disinformation, eventually the truth is meaningless. In your example, eventually the wrong answers may gain credence as correct, and you’ll find yourself unable to “verify” your own birthday when you really need to! Still, I find the option of sowing disinformation a pragmatic and sensible decision in the face of relentless and dangerous incompetence without any accountability or consequence.


> and start investing government resources into a separate 2-factor system

Gov't issued Yubikeys or a new gov't issued "smart ID" with a 2-factor system as part of it?


> as complex as Equifax

I can only imagine that this point becomes lawsuits that span decades.


This is the first time I'd heard anything about expired certs, too.

It's absolutely ridiculous that any kind of a security team -- not to mention one that's supposedly safeguarding such an immense amount of data -- could let monitoring certificates expire at all. It's almost inconceivable that they could let seventy-nine certificates that are required for security monitoring expire for months on end.

I've worked in security for my whole career, and I'm firmly of the optimistic belief that someday, far fewer security-minded people will be needed to keep organizations secure. This is an excellent counterpoint to my argument, though: even the people who supposedly specialize in keeping things secure can apparently be absolutely clueless.


> It's almost inconceivable that they could let seventy-nine certificates that are required for security monitoring expire for months on end.

Even worse, the "seventy-nine" figure was just the certs that were associated with "critical" systems. The total number of expired certs was at least 324.

> At the time of the breach, however, Equifax had allowed at least 324 of its SSL certificates to expire. Seventy-nine of the expired certificates were for devices monitoring highly business critical domains.

(from page 70 of the report)


It's entirely possible they were monitoring cert validity, and that monitoring itself failed due to the expired certs :)

We don't really know what the certs were used for (e.g., whether they were needed to decrypt traffic, or just used as part of the reporting). And yes, the team is responsible for keeping them valid, but... this isn't the first time that cert expiration "broke" something and left us worse off than with no SSL at all.


Detecting data exfiltration has always seemed like an impossible problem to me. The implication is that "infiltration" has already occurred, and now the attacker just has to read/send the sensitive data somewhere. But if they have enough control to tell some system to dump its contents somewhere, how could they not have enough control to obfuscate the data itself?

It seems like these systems could only work by either using some heuristic like data volume to decide when something is being "exfiltrated", which isn't nearly as useful, or by whitelisting allowed communications, which seems absurd.

What am I missing?


Heuristics like data volume are a good starting point, but knowing your good traffic flows is also important. In a "secure" environment you'd route all traffic through a known choke point that is network controlled. In AWS this would mean using the routing layer of the VPC to force traffic through something like a Squid proxy before allowing it out.

On that proxy, outbound traffic destination hosts or IPs would be compared to a list of known destinations, and rejected if no match is found.

These methods are not 100%, but they do help add another layer to the security process.
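
The comparison itself is dead simple; a toy sketch in Python, with made-up destination names:

    # Toy egress filter: allow outbound connections only to known destinations.
    ALLOWED_DESTINATIONS = {
        "api.partner-bank.example",   # hypothetical known integration
        "updates.vendor.example",
    }

    def allow_outbound(host: str) -> bool:
        """Permit the connection only if the destination is on the allow-list."""
        return host in ALLOWED_DESTINATIONS

    assert allow_outbound("api.partner-bank.example")
    assert not allow_outbound("attacker-drop-site.example")  # no match: rejected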


Whitelisting allowed outgoing targets isn't absurd at all. It should be standard practice for anyone running a system remotely like Equifax's. It's not the only type of protection, but if your business is basically a lump of data, you want to know exactly who you're communicating with.


Which just means you need to proxy your exfil with a host that has external traffic, such as anyone's laptop, or an edge server.

Of course, now this means you just need to detect that sort of proxying, which may be easier.

Detecting exfil sucks and is hard.


Sure, but that's at least an additional piece of security, requiring additional work to bypass.

I totally agree that 100% exfil detection is near impossible. But if you were going to try to get close to it, this is one of the world's databases where you really would want to. Their entire model is about collecting a shit ton of information and providing very heavily controlled access to it. Very well paid people should have spent a lot of time examining how to balance the need to pull in data from many sources, while providing only monetized ways to actually retrieve said data. They're like an old-timey pirate captain, who spends most of her time roaming all over the place in the hopes of being able to plunder someone else's treasure, then hiding said treasure in a safe place.

Except they really didn't take the hiding the treasure part very seriously. ;)


> Sure, but that's at least an additional piece of security, requiring additional work to bypass.

I would not consider the ability to proxy traffic much of a new capability for attackers. You can do it trivially with a single SSH command. Even a script kiddy with automated tooling should be able to handle this.

Agree that detecting exfil when you have sensitive data is very important though. Just that it takes a lot of work to protect against, and a ton of work to detect.


So you're going to whitelist every mortgage broker, car dealer, apartment manager, etc who might need to pull a credit report? Doesn't sound practical.


No, not like that. You have a delivery service for your credit reports, which can make outgoing connections as it pleases, that ensures every credit report it delivers is paid for. You invoke it from inside your network, sending it the credit report along with the transaction number. It looks up that transaction to figure out where it needs to deliver the credit report. It doesn't have access to read any of the raw data.

Won't be perfectly secure, but it diminishes a major area of risk.

(There are other architectures that will accomplish the same thing, the key is that if a machine can access the user databases, you should be drastically limiting what kind of outgoing connections it can make.)
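
A toy sketch of that shape in Python, with hypothetical names; the key property is that the delivery service ships only reports tied to a known, paid transaction, and it's the only component allowed to talk to the outside:

    # Sketch: only the delivery service makes outbound connections, and only
    # for transactions the billing system already knows about. Names invented.
    PAID_TRANSACTIONS = {
        "txn-1001": "https://broker.example/inbox",  # stands in for a billing lookup
    }

    def send(url: str, blob: bytes) -> None:
        print(f"shipping {len(blob)} bytes to {url}")  # stub for the real transport

    def deliver_report(transaction_id: str, report_blob: bytes) -> None:
        """Deliver a report for a known, paid transaction; refuse anything else."""
        destination = PAID_TRANSACTIONS.get(transaction_id)
        if destination is None:
            raise PermissionError(f"no paid transaction {transaction_id!r}")
        send(destination, report_blob)

    deliver_report("txn-1001", b"...credit report...")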


> But if they have enough control to tell some system to dump its contents somewhere

This doesn't have to mean that they have full, unrestricted access. Access to data may mean that they could get raw records via some internal API. That API may still have logging, access control, DoS protection, etc.

Just like you can detect and prevent an external person from scraping your website in most cases, you can detect some internal service requesting way more than an average hourly number of records. Or with a better audit - specific verifiable entity (signed employee requests, external request id markers, etc.) requesting more than they should.

That assumes you have a well-designed system with multiple tiers that assumes no trust. It's not possible if every service connects to the same database with yolo-root-access.
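
As a toy version of that volume check (service names and the alert multiplier are invented):

    # Toy detector: flag a service pulling far more records than its hourly baseline.
    from collections import Counter

    HOURLY_BASELINE = {"scoring-service": 5_000, "dispute-portal": 200}  # invented
    pulled_this_hour = Counter()

    def record_access(service: str, n_records: int) -> None:
        """Tally records pulled per service and flag big deviations from baseline."""
        pulled_this_hour[service] += n_records
        baseline = HOURLY_BASELINE.get(service, 0)
        if pulled_this_hour[service] > 10 * baseline:  # arbitrary multiplier
            print(f"ALERT: {service} pulled {pulled_this_hour[service]} records "
                  f"this hour (baseline {baseline})")

    record_access("dispute-portal", 150)     # normal
    record_access("dispute-portal", 5_000)   # way over baseline: alerts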


I get the impression that they did connect with yolo root access.

> Although the ACIS application required access to only three databases within the Equifax environment to perform its business function, the ACIS application was not segmented off from other, unrelated databases. As a result, the attackers used the application credentials to gain access to 48 unrelated databases outside of the ACIS environment.


This isn't a technology problem. This is an accountability/incentive problem.

If you cannot store data securely then you should not profit from that data. Business models that insecurely store consumer data should not be viable.


There are two approaches to detecting exfiltration that I consider state of the art:

* Network traffic capture for deferred analysis
* Egress analysis (including decrypting SSL/HTTPS)

The first approach is, IMO, more realistic. Although you can't prevent the exfiltration, you can detect it lazily after it's happened, including where it went, what was sent, etc. Although I'm sure several companies support this, Eastwind Networks (https://www.eastwindnetworks.com/) does a great job for Cloud and on-prem workloads.

The second option, while more thorough, requires big boxes to run, doesn't scale well, is hard to install on clients (the only way I know to "break" HTTPS is to install a custom CA on all clients and MITM all public traffic, which would break for sites using HSTS and HTTP Public Key Pinning). It's only vaguely feasible, and is fraught with issues.


My company utilizes Corvil appliances and network taps on the switches to monitor activity. We mostly use it for metrics, but there's no reason it couldn't monitor exfil. Only problem is it only has enough storage for about 3-4 days of network traffic.


Ideally, the systems that prevent "infiltration" are separate from the systems that detect (and even prevent) "exfiltration". Just because you breached the firewall, that doesn't mean you can delete all the logs of network traffic passing through it.

It's of course much more difficult than that in practice, but that's the general idea.


Equifax cannot detect exfiltration of your data because distributing your data is their line of business. Your personal details pouring out of their servers at high speeds is what always happens there, 24 hours a day.


If my business is distributing data, the number one thing I would monitor for is that I'm getting paid for the data I distribute. Data that gets distributed for free is nearly the worst thing my software can do, so I should be investing a ton in preventing it.
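
A toy version of that reconciliation, with invented flow names: compare what actually left the network against what billing knows about, and alert on any difference:

    # Toy reconciliation: every record that leaves should match a billed transaction.
    shipped = {"txn-1001": 480, "txn-1002": 520, "mystery-flow-7": 9_000_000}  # records out
    billed = {"txn-1001": 480, "txn-1002": 520}                                # records paid for

    for flow, n_records in shipped.items():
        if billed.get(flow) != n_records:
            print(f"ALERT: {n_records} records left via {flow} with no matching invoice")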


> I'm a little disappointed in the final "conclusion" of the report, though. The end of the executive summary basically chalks the breach up to two things: "Equifax's IT management structure was complicated" and "Equifax uses legacy software that is hard to secure". These are valid points, but these are also issues that nearly every single major corporation in the world faces, and yet many of them still manage to prevent (or at least mitigate) major breaches. These aren't good enough reasons to explain why Equifax failed so spectacularly compared to every other bureaucratic company with legacy software.

Sure, lots of companies can manage legacy software, but arguably the target on Equifax's head is substantially larger than most companies'. They are the holy grail of personal data. Nothing should be legacy with them.


Legacy systems are a red herring. It's just a way to shift blame to predecessors.


Everyone has legacy software. NASA has legacy software. But legacy isn’t an excuse to leave it unsecured or unmanaged.


Maybe that's how V'Ger was created. The Voyager software was clearly not kept up to date with the vendor's critical updates. When the satellite was found, the alien hackers laughed at how out of date it was, and how an old 0-day could still be used. The hardest part was deciding to make it into a bot in a botnet used for intergalactic DDoS attacks.


I am just surprised Equifax isn't bankrupt. Even nominal damages, aggregated over the next 10 years of class action lawsuits, should bankrupt them. I feel corporate responsibility isn't going to come about until these security breaches cause companies to go out of business.


"Equifax's IT management structure was complicated"

This is like a loss of primary containment at a nuclear facility and the writeup saying "nuclear plants are hard".

Color me flabbergasted.


Devil's Advocate here

But what were you expecting? That the people heavily sponsored by large corps like Equifax would write a nasty report against the people that are basically paying their salaries?

Have you ever seen the 9/11 report? After all the investigations, time, and money spent, the report basically stated that... there was a terrorist attack and buildings fell.

Long ago I gave up on buying popcorn when high-caliber political committee reports come out. Bottom line is, "this is America and this is business," and Equifax is in the business of making billions of dollars; even if someone forgets to renew an expiring certificate, the caravan goes on.


Can someone explain why an expired certificate, on a monitoring device, would cause the device to completely fail rather than just spit out warnings about the certificate?


Page 34: The default setting for this device allowed web traffic to continue through to the ACIS system, even when the SSL certificate was expired. When this occurs, traffic flowing to and from the internet is not analyzed by the intrusion detection or prevention systems because these security tools cannot analyze encrypted traffic.
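
In other words, the appliance failed open. A sketch of that design choice (not the appliance's actual logic):

    # Sketch of the fail-open default the report describes (not real appliance code).
    def inspect(packet: bytes) -> None:
        print(f"analyzed {len(packet)} bytes")   # stands in for the IDS/IPS

    def forward(packet: bytes) -> None:
        print(f"forwarded {len(packet)} bytes")

    def handle_traffic(packet: bytes, cert_valid: bool) -> None:
        if cert_valid:
            inspect(packet)  # traffic can be decrypted and analyzed
        # Fail open: with an expired cert the packet is never analyzed, but it
        # still flows, so nothing looks broken. A fail-closed design would drop
        # it instead, turning the expired cert into a visible outage.
        forward(packet)

    handle_traffic(b"hello", cert_valid=False)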


Spitballing here based on my experience in the field but without any specific knowledge about Equifax's situation:

Their monitoring system might have been using TLS to communicate the events to their aggregation tool, and when the cert expired, you don't really want log data with potentially confidential/critical security information traversing insecure channels, so they may have had it configured to not send any data if the cert wasn't valid.

As for warnings about the cert, it's possible they (stupidly) configured it to not send warnings, or maybe it was sending warnings but nobody was paying attention. I've seen situations before where such warnings were set to go to XYZ person's mailbox, but XYZ person leaves the company and nobody remembered to update the destination address for the alerts.


My guess is that they were using private certificates to read encrypted data, without which they couldn't inspect their traffic.


Detection of the exfiltration would likely have been triggered on volume alone. Possibly it was simply that the service trying to raise the alarm could no longer connect due to a lapsed cert.


(speculation) It could be that the expired certs were on the ingestion side. The monitoring agents tried to report something to the central place, but failed due to the expired cert on the TLS connection. No other monitoring picked up that new data was not coming in.


Uninformed speculation, but I imagine the monitoring device sends warnings and such to a central SIEM/log server (like Splunk) for analysis and correlation across multiple devices. That channel is likely web server calls or JSON posts, over HTTPS.


Maybe the cert on the web server was updated, but the private key for the updated cert was never copied to the SSL Visibility Appliance?


> Recommendation 6: Reduce Use of Social Security Numbers as Personal Identifiers. The executive branch should work with the private sector to reduce reliance on Social Security numbers.

I'm disappointed this is recommendation 6, but at least it is in there. I'm also disappointed that they suggest the executive fix this problem instead of legislating a solution. Hopefully they take some action on their own recommendation!


I would support a government initiative that deprecated social security numbers entirely.

Come up with an alternative plan, give businesses some reasonable number of years to stop relying on them, and after that point they'll no longer be issued to anyone new who's born.

Or if you don't want to get rid of them, treat them like public information. Give businesses X years, and then say, "past this point, we're just going to make these searchable by anyone who makes a request." They're an ID number, not a password.


Yeah there should be some sort of government sponsored cryptographic identity system :\

I hope some of the identity verification blockchain companies succeed for this sort of thing.

What we have now is laughably bad.


Government and a distributed ledger seem diametrically opposed. Either the government creates a centralized identity system and is the sole authority, or a distributed system is used to grant identity numbers. Half measures in between seem more problematic. Personally, I'm fine with the government being the single source of truth for that stuff; it is kind of their raison d'être.


The executive branch is the proper solution.

If Congress mandates a solution, then it'd be like the VHS stuff all over again. Congress writes a thingy about VHS in the 1980s, and it's completely irrelevant 10 years later. (If a law states that something with VHS is done a certain way, will it apply to DVDs or BluRays when they are invented 10 years later? Or to streaming media 20 years later?)

The Executive Branch is the one that actually runs the government. Legislative Branch / Congress sets policies, but shouldn't set solutions. Law goes out of date incredibly quickly.

Ex: If Congress says that RSA Tokens are to be used instead of SSNs, what happens if a better invention (ex: Google Titan) comes out? Furthermore, even if Congress writes a certain policy down (ex: Two Factor Authentication is necessary to protect bank accounts), the Executive Branch is still the ones who enforce the matter.

So in the case of Two Factor Authentication (legal requirement of banks to protect your bank account), the Executive Branch says that "3-personal questions + Password" counts as two-factor security in the USA. And that's why you have so many banks implementing "3-secret questions".

------------

So regardless, the job will come down to the Executive Branch.


The legislative solution would be to mandate a national ID with a cryptographic token, and then leave it up to the executive to define the implementation.

Right now, even if they wanted to, the executive can't force a national ID.


> The legislative solution would be to mandate a national ID with a cryptographic token, and then leave it up to the executive to define the implementation.

That's still too specific. "national ID with a cryptographic token" means that OpenID / Paypal-based single sign-on logins are illegal.

Yeah, writing laws is hard as heck, and Congress is NOT expert in the field of cryptography / login security. So Congress will get it wrong if it even tries to write the law in that manner. The Executive Branch CAN hire experts (ie: 18f, NIST, etc.) to define best practices.

As such, the proper legislative solution would be to mandate "industry best practices of identity protection, as defined by (Insert Agency Here, maybe NIST)".


> "national ID with a cryptographic token" means that OpenID / Paypal-based single-sign on logins are illegal.

Why? Why doesn't it just mean that there's a canonical government system that doesn't touch PayPal or OpenID that can be used for stuff that would currently be tied to your social security number?


PayPal touches your SSN, as it is tied to your bank account. (Bank accounts require a Taxpayer Identification Number, which is the SSN for any citizen. Non-citizens get a different ID, but really it's the TIN that is the cause of all of these security issues.) So yes, really, the proposed wording would probably make PayPal illegal.

You have a point on OpenID, but my overall point is that OpenID would probably be sufficient for most financial transactions on today's internet. OpenID is basically equivalent to Paypal's security model.

Neither Paypal nor OpenID require 2-factor or security tokens. I think they're an optional feature. But in any case, the Paypal / OpenID model of identification (Paypal controlled website verifies password, and sends a token to the 3rd party website) would be sufficient for today's security.

----------------

Of course, none of this even touches upon Equifax. Equifax uses SSNs to identify people, and often without their knowledge or consent. Equifax is a service to the financial industry, to help keep tabs on individual's history.

So the PayPal model will NOT work with Equifax's use case, because the individuals don't always know when they are being tracked. Maybe it'd have to be something akin to OAuth's model.

In any case, I'm not an expert either. I just appreciate the difficulty of this problem.


Social security isn't a piece of technology, it's a government program. How is it in any way like VHS?


Whatever will replace SSNs will be a piece of technology. Ideally, something that uses public/private key encryption.

There are a lot of things that use public/private keys, however, or security tokens, or whatnot. Should it be a smartphone app? A hardware dongle? Etc. If a hardware dongle, which one?

As such, it's the Executive Branch's job to research the various technologies, and implement a new standard to solve the online identity problem.
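
To make the public/private key idea concrete, here's a minimal challenge-response sketch in Python (it uses the third-party cryptography package; the enrollment and registration story is hand-waved, this is just the core primitive):

    # Challenge-response sketch with an Ed25519 keypair.
    # Needs the third-party 'cryptography' package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # held only by the citizen
    public_key = private_key.public_key()       # registered with the ID authority

    challenge = os.urandom(32)                  # a verifier sends a fresh nonce...
    signature = private_key.sign(challenge)     # ...the holder signs it...

    # ...and the verifier checks it against the registered public key.
    public_key.verify(signature, challenge)     # raises InvalidSignature on failure
    print("identity proven without revealing a reusable secret")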

----------

For example, 18f (White House's crack website team) has the following: https://login.gov/

Github code here: https://github.com/18F/identity-idp

If single sign-on were widely deployed across US Agencies (and tied to financial services / private sector banks), we'd be in a way better place.

In any case, this is clearly the realm of the Executive Branch. Specifically 18f probably should continue to lead the effort, as they have been.


All Congress has to do is pass a law saying "private companies are not allowed to store SSNs" and let those private companies figure out what the replacement is. Congress doesn't need to legislate the replacement technology.


That fundamentally misunderstands the problem.

SSNs are fine and useful. They just shouldn't be the "password" to financial systems. When every damn bank uses "What's your PIN and SSN" to gain access to an account... that's the problem.

The issue is that private companies use SSNs for security. There's nothing wrong in using SSNs as an identifier.

The USA needs to start assuming that SSNs are public information, and to build security through other means. SSNs were never a secret number to be used for authentication / authorization purposes.


> There's nothing wrong in using SSNs as an identifier.

There's plenty wrong with this. SSNs were already a pretty bad identifier when they were invented and didn't get better.

There's no check digit, so if you make any mistake at all it's probably a different person's valid SSN.

Until relatively recently they were assigned both chronologically and geographically, so a one-digit error in SSN plus address plus date of birth... causes no red flags, it looks about right. Modern ones are randomly assigned, but it'll be decades before that covers most people.

Like most similar systems, the US SSN also very explicitly says it isn't suitable for any purpose except numbering Social Security recipients. People you want numbers for may or may not have SSNs. Don't worry though: since they all look pretty similar and there's no check digit, the people who don't have one long ago learned to give a bogus SSN, probably belonging to somebody else. Somebody you know probably does this today. What a useful "identifier".
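
For contrast, here's the Luhn check digit scheme that credit card numbers use, sketched in Python. SSNs have nothing like it, which is why a one-digit slip usually lands on someone else's valid number instead of failing validation:

    # Luhn checksum, as used by credit card numbers (SSNs have no equivalent).
    def luhn_valid(number: str) -> bool:
        digits = [int(d) for d in number]
        # Double every second digit from the right; subtract 9 if it exceeds 9.
        for i in range(len(digits) - 2, -1, -2):
            digits[i] = digits[i] * 2 - 9 if digits[i] * 2 > 9 else digits[i] * 2
        return sum(digits) % 10 == 0

    assert luhn_valid("79927398713")       # textbook valid example number
    assert not luhn_valid("79927398712")   # a one-digit error is caught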


What you say is true, but is entirely tangential to the topic of Equifax breaches.

> Don't worry though: since they all look pretty similar and there's no check digit, the people who don't have one long ago learned to give a bogus SSN, probably belonging to somebody else.

It's trivial to use US Government sources to do a name check on the SSN, to ensure that you've got the right person.

https://www.ssa.gov/employer/ssnv.htm

https://www.ssa.gov/cbsv/

The latter costs money, but not much money in terms of doing business.

---------

The fact remains: People use SSNs because the Social Security Administration is very good about keeping SSNs in order. Perhaps the number could be designed a bit better, but the system has been in use since the 1930s and there's no better number to use in the USA in most cases.


> In 2005, former Equifax Chief Executive Officer (CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks.

> Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging. Equifax recognized the inherent security risks of operating legacy IT systems because Equifax had begun a legacy infrastructure modernization effort. This effort, however, came too late to prevent the breach.

As someone who works in Tech M&A, I often tell clients "hackers go after the weakest link and you just acquired a new link". They nearly universally ignore this advice and skip hardening even the smallest of acquisitions, because, well, "growth". Someday people will learn.


If you start with the accounting assumptions that work on security is a cost, and stored private data is not a liability, then all the actions are rational within the narrow scope of profit optimization. (I'm not in agreement with this, but it seems to be true.) Equifax holds these assumptions even more strongly because their fundamental existence is founded on monetization of private data.


This after-action also confirms that breaches are not a liability; in the end, their share value recovered nearly immediately.


Data is always a liability unless it is the core component which drives value in your business (i.e., you are Google or Facebook).


> Someday people will learn.

Not convinced of this at all. There just isn't sufficient financial incentive.


If you are small and have no government support, you have to learn in order to survive. If you are a monolith like Equifax and you only have to explain yourself to people who are openly soliciting campaign contributions (read: bribes), then learning is a waste of time when you can just buy off the politicians and buy out the little guys who are doing things better.


Well at least the i-bankers who managed those acquisitions will be punished in a way that updates their incentive function for next time. /s


... unfortunately I don't think people are going to learn this anytime soon.


Just want to talk about Recommendation 6 (Recommendation 6: Reduce Use of Social Security Numbers as Personal Identifiers). Page 96.

The recommendation is essentially "Try to convince the public and private sector to use them less." But I'd argue it is well past time that SSNs be replaced by something fit for purpose. SSNs were never designed to be a unique form of ID, and using things like the cardboard card as further verification is almost comical.

I'd like to see an aggressive alternative that uses the best of our security knowledge and then have it vetted by everyone in the security industry with a pulse. We've seen other countries try this. But most of those countries outsource it to the lowest government bidder, who hide the inner workings behind proprietary claims, and never vet the resulting proposal.

Instead we need something more akin to the United States Digital Service: a publicly created proposal (fully released specs) that is vetted by every academic and security expert they can find.

The hardest part will be saying "no" to requirements creep. Allow certain government agencies to continue to use SSNs for now, and have the new ID "flip" into an SSN behind the scenes. Better than needing five hundred different departments to adopt the new standard before it can go live.


> Try to convince the public and private sector to use them less.

It's gov, so they can do more. It can also be "By 202x, using SSNs for identification for non-Social-Security purposes becomes illegal."


Why is Equifax in business still? I don't get it.

A company like Bear Stearns got "killed"; Enron and others got litigated out. But it looks like Equifax did not face any consequences. It's high time we treat data as an asset class and regulate accordingly. Personal information in particular is acquired by every company and is treated as a valuable commodity. Companies get acquired purely for the amount of data they have. The market has already declared it an asset, so why is it not regulated?


Bear Stearns failed because it was overextended on subprime mortgages and lost the ability to conduct business when the cash dried up.

Enron was convicted of massive, deliberate accounting fraud. The company wasn't viable without the fraud.

Not even remotely similar to the Equifax breach.


Dissimilar from Enron only in that Equifax was never convicted for its fairly deliberate negligence of the public interest, or for its massively distributed tort, because the civil case is infeasible, and because no law or regulation is adequate to cover what ought to be criminal case -- this I think is yalogin's point.

The security patch here is not a technical solution. It's to wipe out the stockholders of any company this negligent, and repossess the spare homes of the entire C-suite. That should cover most attack vectors pretty reliably.


> its fairly deliberate negligence of the public interest, or for its massively distributed tort

If anything, the report says the opposite of that.

> It's to wipe out the stockholders of any company this negligent, and repossess the spare homes of the entire C-suite.

This is why we don't get any real reform...citizens have these insane ideas on how to "fix" the problem. How can the government take you seriously?


> the report says the opposite

My comment was not aimed at agreeing with the majority report. Would you dispute my characterization of those expired certs as "negligence"? Or perhaps you don't consider the budget meetings that declared how low cybersecurity fell on the list of priorities as "fairly deliberate"?

> insane ideas

Hmm, maybe you're right. Monetary penalties for undesirable behavior are insane. Let's repeal all tort laws and cancel criminal fines, and make no attempt to levy proportional damages for anything. Let the C-suite keep their options!

> How can the government take you seriously?

If all else fails, a yellow vest might help. I'm hoping it doesn't come to that here.


> Would you dispute my characterization of those expired certs as "negligence"?

I would. Does the report indicate that the cert was expired due to budget issues? Does it show that cybersecurity was low priority? Again, the report shows the opposite: they had solid policies and procedures in place, but the failures seem to be in execution and training.

> Monetary penalties for undesirable behavior is insane.

No, that's just fine. And Equifax took a huge stock hit, had to make a potentially cash-generating service free for everyone, has to earn back its reputation, etc.

> Let's repeal all tort laws and cancel criminal fines, and make no attempt to levy proportional damages for anything. Let the C-suite keep their options!

Show the criminal activity that resulted in the breach (by Equifax, not the attackers). Show the damages that have resulted from the breach.

> If all else fails, a yellow vest might help. I'm hoping it doesn't come to that here.

If it does, it won't be over the Equifax breach (or even the other 4 larger breaches: https://abcnews.go.com/Technology/marriotts-data-breach-larg...).


> Does the report indicate...

> Does it show...

> the report shows...

I'm not asking you whether the House Republicans would call the expired certs negligent, or the security underprioritized. I'm asking you, as the reader of a technical forum. Do you believe everything the government tells you? Or here, what one party tells you? At the very least, read the minority report linked elsewhere in this thread, and interpolate.

> huge stock hit

Nowhere near huge enough, and it rebounded once it was clear there were basically no consequences.

> has to earn back its reputation

Why? Equifax couldn't care less what its unwilling inventory thinks of it.

> Show the criminal activity...

Name one victim whose damages are large enough to merit hiring a lawyer to comb all the contracts to identify the line where a bank promised that Equifax promised that the victim's data would be safe, and I'll show you a breach of contract. Hint: it's a different line for each victim.

> Show the damages...

Precisely my point: the damages to the public are "massively distributed" and infeasible to pin down, which is why there should be a criminal penalty, like we do for say, ozone-depleting chemicals.

> it won't be over the Equifax breach

(sigh) you're probably right.


> I'm not asking you whether the House Republicans would call the expired certs negligent, or the security underprioritized. I'm asking you, as the reader of a technical forum.

No, that's not negligent. Nor was security "underprioritized". If you read the reporting on the issue (whether the government report or just reputable news), it shows solid processes and procedures, but there were implementation flaws. There is no amount of security that will ever be perfect.

> Do you believe everything the government tells you? Or here, what one party tells you? At the very least, read the minority report linked elsewhere in this thread, and interpolate.

Considering that the report matches all the other reputable reporting on the issue? Yes, I believe it. Equifax is claiming "factual errors", but I'd have to see their response before commenting on that.

> Nowhere near huge enough

That's your opinion, but the market has decided otherwise.

> Equifax couldn't care less what its unwilling inventory thinks of it.

Correct. I was talking about their customers...the businesses that supply and use their data and services.

> Name one victim whose damages are large enough to merit hiring a lawyer to comb all the contracts to identify the line where a bank promised that Equifax promised that the victim's data would be safe, and I'll show you a breach of contract. Hint: it's a different line for each victim.

Once you do that, prove that the information came from the Equifax breach, and not one of the dozens of other breaches, let alone the 4 other large breaches I cited.

> the damages to the public are "massively distributed" and infeasible to pin down, which is why there should be a criminal penalty, like we do for say, ozone-depleting chemicals.

Or they're negligible and impossible to detect at any meaningful level.

> (sigh) you're probably right.

It's troubling that you seem to think violence is some sort of solution to any problem, let alone this problem.


> the market has decided

Do you understand what an externality is? When I say it's not huge enough, it's pretty clear that I don't mean Equifax is overvalued & people should all go out and short it. The lawmakers and regulators have decided that the externality will have no significant consequences. The market hardly decides anything here; it reacts.

My reading of the reports is that this company was roughly in line with common practices. My point is that common practices must change.


> The lawmakers and regulators have decided that the externality will have no significant consequences.

No, they just haven't finished yet. There's already been some new regulation for the credit industry as a whole, and I believe more is coming.

> The market hardly decides anything here; it reacts.

If Equifax's customers decided that they couldn't safely do business with it, then Equifax would cease to exist.

> My reading of the reports is that this company was roughly in line with common practices. My point is that common practices must change.

I certainly agree there, wholeheartedly. But you seem to be singling out Equifax, when it's an industry wide problem.


Is Equifax viable if they actually secure their data? (Security costs money, and it slows down operations, which leaves some profits on the table.)


Bear Stearns was acquired by JP Morgan when it was under financial duress.

Enron went out of business due to losing an enormous amount of money.

Neither of these things had much to do with government regulation.


The analogy was that Equifax lost all of its assets and then some. It failed in the only job it had: protecting users' data, its only asset. It lost credibility, and it could be argued that its name became toxic. It's very similar to Bear Stearns and Enron, IMO.


They didn't lose them. They still have them. They just let someone else make a copy. That didn't make the copy that Equifax still has any less valuable.

Keeping that data secret isn't the only job it has. In fact, that isn't even really their primary job. Their primary job is evaluating the creditworthiness of potential customers for Equifax's clients. As far as I know, they are still doing that job quite well.


It didn't "lose" anything, nor is "protecting users' data" its only job, nor is that data their only asset. It certainly lost credibility, but the name isn't toxic.

Again, Bear Stearns was insolvent due to being overextended, and Enron was propped up by massive accounting fraud (like WorldCom). Equifax, on the other hand, was the victim of a cyberattack.


> its name became toxic

What tends to matter is whether your name is toxic to your customers, not whether your name is toxic to your unwilling inventory.


You could argue that deregulation played a significant role in the fiascos that brought down both of those companies.


Nobody pays Equifax to protect data. They pay Equifax for access to data, which Equifax has been able to keep doing.


There is so much to comment on and digest in this report, but the lifecycle-of-an-attack diagram [1] on page 31 (figure 164) is something every software developer should burn into memory.

It is easy to fall into the trap of seeing the most minuscule of vulnerabilities and dismissing it with "no one could ever possibly utilize that as a vector; it's not critical."

But that minuscule vulnerability becomes a single rung in a ladder to everything in the system. Every seemingly small vulnerability matters, as this painfully shows.

[1] referenced here: https://blog.hellobloom.io/how-hard-was-the-equifax-hack-a3b...


That's something you keep seeing as a pattern in many hacks. It is rarely just one mistake, usually a chain of small ones, each of which doesn't look all that bad by itself.


That's why technologies like SIEMs exist. No number of humans could look at logs across all the various systems and spot anomalies within them, but a SIEM can. But only if it's turned on, ingesting data, running a useful ruleset, and crucially: only if someone is watching the output.
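
For illustration, here's a toy version of the kind of rule a SIEM runs (the field names and the 10x threshold are mine, not from the report): flag any host whose outbound volume jumps far above its own recent baseline, which is roughly what 19 months of unmonitored exfiltration traffic would trip.

    from collections import defaultdict, deque

    WINDOW = 24        # past intervals kept per host
    THRESHOLD = 10.0   # multiple of baseline that trips the rule

    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def ingest(host: str, bytes_out: int) -> None:
        baseline = history[host]
        if len(baseline) == WINDOW:
            avg = sum(baseline) / WINDOW
            if avg > 0 and bytes_out > THRESHOLD * avg:
                # In production this pages a human -- the "someone watching
                # the output" part that no ruleset can replace.
                print(f"ALERT {host}: {bytes_out} bytes out vs baseline {avg:.0f}")
        baseline.append(bytes_out)

    # Simulated feed: steady traffic, then a large exfiltration-like spike.
    for _ in range(WINDOW):
        ingest("acis-web-01", 1_000)
    ingest("acis-web-01", 50_000)   # triggers the alert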


The report linked here is the House Oversight Committee (Majority) Staff Report.

Another report from the committee's minority is also available.

https://democrats-oversight.house.gov/sites/democrats.oversi... Minority Report - FINAL 12-10-2018.pdf


Link not working for me. Try:

https://democrats-oversight.house.gov/sites/democrats.oversi...

Key recommendations from the minority report:

"Based on the investigation conducted by the Committees, four key legislative reforms proposed by Democrats would help prevent future cyberattacks:

[A] hold federal financial regulatory agencies accountable for their consumer protection oversight responsibilities;

[B] require federal contractors to comply with established cybersecurity standards and guidance from the National Institute of Standards and Technology (NIST);

[C] establish high standards for how data breach victims should be notified;

[D] and strengthen the ability of the Federal Trade Commission (FTC) to levy civil penalties for private sector violations of consumer data security requirements."

On [B], they note that "Equifax was a federal contractor at the time of its data breach".

On [D], they note that "In the three years before the Equifax data breach, the company spent only about 3% of its operating revenue on cybersecurity—less than the company spent on stock dividends...Civil penalties would incentivize private sector companies to prioritize and invest in continually upgrading and deploying modernized IT solutions and applying cybersecurity best practices."


The report doesn't appear to mention that you could just log in to their web portal with an obvious password [1]. It also doesn't appear to be within the report's purview to look at the leadership team selling stock [2]. Both of which it should consider when reviewing the competency and ethics of an organization managing and profiling nearly everyone in the U.S.

Speaking of which... why is it only ~50% of the adult population in the U.S.?

If the intruders were moving around the Equifax network at will (which, from the report, it appears they were), we should assume 100% of the data was breached.

[1] https://www.cnbc.com/2017/09/14/equifax-used-admin-for-the-l...

[2] https://www.bloomberg.com/news/articles/2018-03-14/sec-says-...


> The report doesn't appear to mention that you could just log in to their web portal with an obvious password [1].

That was a different portal and a different breach.


You're making the large assumption that Equifax runs a single, global, interconnected network.


The most alarming part of this is that it appears the intrusion was only discovered when the new SSL monitoring certificates were being checked to ensure that the appliance was back "on". I wonder how long it would have taken if someone hadn't spotted something suspicious by accident at that point -- I'm sure we've all spotted bugs or flaws by accident while testing a completely different feature.
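
What makes the 300+ expired certificates all the more damning is that sweeping for soon-to-expire certs is trivially automatable. A minimal sketch with Python's standard library (the hostnames are placeholders); note that an already-expired cert fails the handshake itself, which is its own signal:

    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    # Placeholder hostnames -- point this at an inventory of monitored domains.
    for host in ["internal-monitor.example.com", "dispute-portal.example.com"]:
        try:
            days = days_until_expiry(host)
            print(f"{host}: {days:.0f} days left" + ("" if days > 30 else "  <-- RENEW"))
        except (ssl.SSLError, OSError) as exc:
            # An expired cert fails verification during the handshake -- also a signal.
            print(f"{host}: TLS check failed ({exc})")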


>The Equifax data breach and federal customers’ use of Equifax identity validation services highlight the need for the federal government to be vigilant in mitigating cybersecurity risk in federal acquisition. The Office of Management and Budget (OMB) should continue efforts to develop a clear set of requirements for federal contractors to address increasing cybersecurity risks, particularly as it relates to handling of PII. There should be a government wide framework of cybersecurity and data security risk based requirements.

From my understanding of FedRAMP, all of the things that Equifax failed to do should already be covered: software patching, isolation of data, audit trails, etc. Seems more like a massive auditing fail.


Reading through the timeline shows that at every step Equifax was trying to do the right thing: internally publishing vulnerabilities, applying patches, scanning for vulnerabilities, etc. The policies and procedures were good.

One difference is that previously Equifax claimed a single employee failed to scan and patch a system. I don't see a reference to that in the report. All I see now is that someone scanned a system improperly:

> The scan did not identify any components utilizing an affected version of Apache Struts. Interim CSO Russ Ayres stated the scan missed identifying the vulnerability because the scan was run on the root directory, not the subdirectory where the Apache Struts was listed.
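
That detail suggests the scanner's scope, not its signatures, was the failure. As a sketch of why scope matters: a file-system walk for Struts jars only finds what lives beneath the root you hand it (the "/opt/acis" path is made up; the version note reflects CVE-2017-5638, which was fixed in 2.3.32 and 2.5.10.1):

    import os
    import re

    # Matches jar names like struts2-core-2.3.31.jar; anything older than the
    # fixed versions in the 2.3/2.5 lines is suspect for CVE-2017-5638.
    JAR_PATTERN = re.compile(r"struts2?-core-(\d+)\.(\d+)\.(\d+)")

    def find_struts_jars(root: str):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                m = JAR_PATTERN.search(name)
                if m:
                    yield os.path.join(dirpath, name), tuple(map(int, m.groups()))

    # os.walk only sees what lives beneath this root -- start it one directory
    # too high or too low and the vulnerable jar simply never shows up.
    for path, version in find_struts_jars("/opt/acis"):
        print(path, ".".join(map(str, version)))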


Check out how Equifax started rolling heads, beginning on page 50. They pinned the fact of not patching Struts on an SVP who was one of hundreds of people notified of the need to patch Struts. But he didn't forward that email, so he's toast!

Now pardon me while I go route my patch management procedures through the nearest baffling and inane dependency.

> A senior Equifax official was terminated for failing to forward an email – an action he was not directed to do – the day before former CEO Richard Smith testified in front of Congress. This type of public relations-motivated maneuver seems gratuitous against the backdrop of all the facts.


> Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed.

1970s? Am I reading that right? HTML wasn't even developed yet.


Most likely they had a web front end that talks to the legacy system. Very common in big companies.

It was probably developed very quickly, possibly outsourced, and just stuck in front of the older system with minimal re-engineering.

Many years ago, I worked on a system that put an X Windows front end in front of a mainframe app that used a 3270 emulator to interact with parts of the legacy app. I imagine this is somewhat similar.


Yes. ACIS is still a terminal-based mainframe application, but it's now fronted by a bunch of shitty Java apps that function as its API.


Anyone got any stories of being affected by this breach?


The problem is identifying which breach caused your problem. Often, criminals use data from multiple breaches. Inaction on data security is actually lowering the liability for companies and pushing more responsibility onto individuals.


I waited in trepidation for the value of my index funds' shares in Equifax to go to zero, but it didn't end up happening. False alarm. Don't know why I got so worked up.


> Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic.

Ouch.


What does the baseline of "good enough security" look like? For physical banks it looks like money stored in vaults with no staff access, cash in transit protected with dye packs, etc.

For the non-physical world I have some ideas:

- The entire infrastructure of IT can be rebuilt in an automated fashion and is done so in a prod-parallel equivalent at least weekly

- Any change to "vital" files on any server is audited (see the sketch after this list)

- err?
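
One concrete reading of the "vital files are audited" bullet, sketched with stdlib hashing (the paths and baseline location are examples only -- real deployments use tooling like auditd or a commercial FIM):

    import hashlib
    import json
    import os

    VITAL = ["/etc/passwd", "/etc/ssh/sshd_config"]   # example paths
    BASELINE = "/var/lib/fim/baseline.json"           # example location

    def digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def audit() -> None:
        current = {p: digest(p) for p in VITAL if os.path.exists(p)}
        if os.path.exists(BASELINE):
            with open(BASELINE) as f:
                previous = json.load(f)
            for path, h in current.items():
                if previous.get(path) not in (None, h):
                    print(f"CHANGED: {path}")   # feed this to the SIEM, not /dev/null
        os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
        with open(BASELINE, "w") as f:
            json.dump(current, f)

    audit()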


This is the equivalent of "We offer our thoughts and prayers" after a mass shooting.


>>"Equifax's IT management structure was complicated" and "Equifax uses legacy software that is hard to secure"

I feel for them (not!). BUT then they shouldn't store any valuable data. They should be uninsurable.



