> One possible explanation, according to several veteran security experts consulted by Bloomberg, is that the investigation didn’t uncover evidence that data was accessed. Most data breach disclosure laws kick in only once there’s evidence that sensitive personal identifying information like social security numbers and birth dates have been taken.
There was one company (very well known) I know of that was breached, but their logging and general security infrastructure was so poor that they had no direct evidence that customer info was accessed, so they didn't have to report the hack. They only found the intrusion because of the excessive load the intruders put on some services.
Customer info was certainly accessed (the attackers were everywhere); it's just that there was no record of it, because the records they kept were so few and far between.
Part of me thinks it's a pretty clever workaround to such laws.
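To make the logging gap concrete: "no evidence of access" often just means "no ability to observe access". Here's a minimal sketch (hypothetical names, not any particular company's stack) of the kind of append-only audit trail that would have produced evidence one way or the other:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: every read of a customer record is written
# to an append-only log, so investigators can later prove (or disprove)
# that specific records were touched during an intrusion window.
audit_log = logging.getLogger("customer.audit")

def fetch_customer(db, customer_id, actor, reason):
    record = db.get(customer_id)  # the actual data access
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,            # service account or user doing the read
        "customer_id": customer_id,
        "reason": reason,          # e.g. a ticket or request ID
    }))
    return record
```

With nothing like this in place, a forensics team can only say "we can't show that data left", which is exactly the loophole described above.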
I worked on a breach where, during the investigation, we found reverse tunnels out of the network (back to the VP's house) where he'd been working on stuff unbeknownst to the firm. They had a VPN to a competitor that a sales engineer had set up when they were exploring a collaboration, and it had never been closed. Their DMZ was awesome on paper (cool color chart) but was literally non-existent. The final icing on the cake: their network equipment and firewalls were all second-hand, with the existing firewall rules left in place when new ones were added. After printing out the rules... at the very bottom: ANY/ANY. There wasn't a lack of money; hundreds of millions had been dumped in by investors. Everyone there had the "I'm an engineer so I don't have to follow rules, that's for the sales guys" attitude, and that more than anything did them in from a security perspective.
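For anyone who hasn't hit that particular failure mode: most firewalls evaluate rules top to bottom and act on the first match, so a leftover ANY/ANY allow at the bottom silently permits everything the rules above it don't explicitly deny. A toy first-match evaluator (illustrative only, not any vendor's syntax):

```python
# Toy first-match rule evaluator showing why an inherited ANY/ANY allow
# rule at the bottom of a second-hand ruleset is catastrophic.
RULES = [
    # (source, destination, action); "*" matches anything
    ("10.0.0.0/8", "dmz-web",     "ALLOW"),
    ("internet",   "dmz-web",     "ALLOW"),
    ("internet",   "internal-db", "DENY"),
    ("*",          "*",           "ALLOW"),  # the leftover rule nobody removed
]

def evaluate(src, dst):
    for rule_src, rule_dst, action in RULES:
        if rule_src in ("*", src) and rule_dst in ("*", dst):
            return action
    return "DENY"  # sane default-deny that is never reached here

# Traffic not covered by the explicit rules falls through to ANY/ANY:
print(evaluate("internet", "internal-file-server"))  # -> ALLOW
```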
Sure, that combined with the fact that courts simply refuse to call anything involving software 'negligence', no matter how extravagantly negligent it might be, would make that a pretty good strategy. The only flaw in it, really, is that it would also open you up to things like the Sony hack, which actually had them shut down operations for a while. As much as companies REALLY do not want to admit it, no matter what they do, their software IS their business in every sense. Without it in a working state, they have to shut their doors. So I suppose you have to strike a balance... secure enough that things keep working, but not so secure that it actually costs money...
Maybe this is the unintended benefit of companies like Domino's claiming they're a "tech business that sells pizza". I'd love to see that language used in an argument that the company has made tech its core competency, and that this lowers the bar for a finding of negligence.
I imagine it would depend on the nature of negligence: were they to leak customer data, they would be treated as a tech business; were they to poison customers, they would be treated as a pizza business.
Between this and their lobbying for no-fault treatment of data exfiltration, it keeps getting better and better. My credit has been locked since the OPM fiasco; I'd recommend others consider doing the same.
If a company finds a breach of sensitive customer data, they need to fix the breach, not wait until they discover a ton of data was lifted... otherwise it's prime stupidity. And the fact that they instead just tried to cover it up until they couldn't anymore shows deceit and untrustworthiness.
Assuming this service held some type of protected information (e.g., credit card data), I'd assume that such conduct would constitute negligence of some variety.
I do performance / app triage work, and I see the same thing. Often I walk into a supposed "emergency" only to discover the problem has been occurring for months, if not years. Often there is a significant cost (e.g., in the millions), but either the organization isn't willing to remediate, or isn't even aware of the full scope of the cost (e.g., "It's not my budget so I don't care"). In at least one case, I came across a security problem where the response was "oh yeah, we've known about that for years". Sigh. Sadly, too often, unless companies have a very large customer who gets angry with them, or they are publicly shamed for a problem, they just let it magically slide.
I came across a "hole" in the design of a vendor's product I was evaluating. Their fancy Java UI actually just downloaded plaintext root credentials for their MySQL database; all security was client-side. As a bonus, the root credentials were debug-logged to the user's local computer.
Making it worse, they actively sold this as a multi-tenant platform to be used by mutually untrusting parties.
When I met with engineering and started to explain, they started smiling and said "this is a known issue and we're going to fix it in our next version."
Quite some time later I ran across people using it in the wild, and they had not patched a lot of the glaring holes. Even their newer version had a hidden input field on the edit-profile page named "IsAdmin". This did exactly what you think.
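For those who haven't seen that anti-pattern: the server was presumably binding every submitted form field straight onto the user record, so anyone who edited the hidden field in their browser's dev tools promoted themselves. A minimal sketch of the flaw and the usual fix (hypothetical handler, not the vendor's actual code):

```python
# Hypothetical profile-update handler with the "IsAdmin" flaw: every
# submitted field is trusted and written to the user record.
def update_profile(user, form_fields):
    for key, value in form_fields.items():
        user[key] = value  # mass-assigns ANY field the client sends
    return user

user = {"name": "alice", "IsAdmin": False}
# An attacker edits the hidden input before submitting the form:
update_profile(user, {"name": "alice", "IsAdmin": True})
print(user["IsAdmin"])  # -> True: instant admin

# The usual fix is an allow-list of client-editable fields:
EDITABLE = {"name", "email"}
def update_profile_safe(user, form_fields):
    for key in form_fields.keys() & EDITABLE:
        user[key] = form_fields[key]
    return user
```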
They ended up having a successful exit as far as I know and I've never heard anyone speak ill of them security-wise.
Telecom is a mess. These holes are easily exploitable for direct profit. But there's so much more low-hanging fruit, I don't think people bother.
This stuff is really the ultimate technical debt. Fixing it seems to provide zero benefit today. But there is a significant chance that one day it's an extinction event (or just a billion-dollar blunder, if you're big enough to survive it).
Risk v. reward, I'm afraid. The Sarbanes-Oxley legislation attempted to put skin in the game for C-level execs at public companies, so I cannot help but notice how many public companies delisted or went private after that. The whole point of the corporate charter was to insulate investors' personal wealth against risk. I guess this is where licensing could provide a backstop against poor development practices, but it seems not to have really caught on.
IMPORTANT: The date of disclosure is ALSO the date that demand for hacked data explodes.
It's good practice to have a staged-disclosure procedure for leaks of this nature.
For example: your bank should be told to start fine-tuning its anti-fraud capabilities BEFORE the entire world is made aware that you can be defrauded in this particular manner.
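As a sketch of what "staged" might mean in practice (the tiers and lead times here are made up for illustration), the idea is simply that the narrowest, most defensively useful audiences are notified first:

```python
from datetime import date, timedelta

# Hypothetical staged-disclosure schedule: each tier is notified some
# number of days before the public announcement.
DISCLOSURE_TIERS = [
    ("card networks / issuing banks", 30),  # time to tune anti-fraud models
    ("regulators",                    21),
    ("large enterprise customers",    14),
    ("general public",                 0),
]

def schedule(public_date):
    return [(who, public_date - timedelta(days=lead))
            for who, lead in DISCLOSURE_TIERS]

# Using Equifax's actual public disclosure date as the anchor:
for who, when in schedule(date(2017, 9, 7)):
    print(f"{when}: notify {who}")
```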
That's really interesting. We noticed a statistically significant spike in declines on debit/credit cards roughly one week before Equifax publicly disclosed.
Here's a question. What if, based on this hack and others, someone decided to publicly post all the details they were aware of? So basically I could go on a website and look up any hacked SSN along with the person and information associated with it. How would the US cope with that?
The reason I ask is that it's definitely not gonna happen. But it's arguably a lot better than the situation right now, where only a few malicious actors have that information. If the data were completely public, I feel you'd see a huge effort to fix the problem, because your neighbor could look up your creditworthiness. Yet I think the situation right now is worse: we won't get that effort to fix the problem, yet 95% of the people who would have caused you problems already have the data.
What would I do with that, though? As a resident of a country that's affected, if I hit a profile that happens to have fraud protection, I go to jail. Now, if I were in a country not so friendly to the US, I might have a shot.
If they had an outside security firm helping them starting in March, and there was another breach in July, that doesn't say much for the capabilities and competency of the security firm.
Not 'helping them', just diagnosing a breach. Part of their due diligence to make sure they don't run afoul of the reporting laws. Once it was determined they didn't have proof of data leaving the network, I imagine they booted that security company immediately. They're not going to spend money to improve their security just because a non-reportable breach occurred. They don't give a shit about your data.
A lot of people want you to come in and find a quick answer and fix, rarely allowing a full, proper investigation. Many times they're averse to spending money and want to cut corners where they can. It's actually disheartening. Much like one of the posters above, I've seen people purposely stop investigations, because if the investigation reported on known issues it would open up more questions about other wrongdoing.
The irony is that their actions on remediation are almost exactly in line with the kinds of decisions that led to the incident in the first place. It's cyclical.
I suppose the difference is that in accounting, any irregularities or shenanigans are quantifiable in dollars. Security breaches sometimes are, but often are not. I'm guessing there's nobody able to prove that his or her identity was stolen as a result of this breach, to say nothing of being able to specify a dollar amount of loss that can be backed up.
With often vague or only theoretical damages, it's harder to muster support for draconian consequences.
Also, people can sort of understand accounting. Dollars and cents and balances are something most people can comprehend. Computer software and security breaches, on the other hand, are much more of a black box for most people. They can't intuitively understand what's sensible and reasonable, or what would constitute negligence, when it comes to protecting software systems and data, other than by relying on what other people tell them.
Hard to imagine a competent security review would not point out: "Your entire database is accessible directly from the web server, and you have no verified plan or demonstrated capability to deploy security patches to the web server within a reasonable timeframe."
To red-team that line of reasoning: hold on to the hacked data but don't use it, wait till the public forgets about it, don't give businesses/lawmakers a reason to neuter the data, then strike a few years down the line.
It's a response to people with crass, Reddit-like usernames, like BloodyDickFart. Sound comment or not, I'd rather not engage in that type of conversation. I don't expect people to use their real names; a username is fine. I said novelty names.
Quotes can have multiple meanings, some of which oppose each other.
Generally one is expected to use context clues to discern which meaning is most relevant to a given sample of text.
In this case they were there to indicate a somewhat mocking tone in my paraphrasing.
The exact quote actually managed to be more condescending than that:
> Commentors with novelty usernames should not expect responses.
On a totally unrelated note, you sound like a wonderfully pleasant person to interact with. I am deeply saddened those of us with "novelty usernames" might miss out on that.
The size of the sales was truly negligible for all the executives involved, in proportion to how many shares they hold and routinely liquidate. The same argument, that there were any sales at all, would have been made regardless of the number of shares.
A company that size will always have material non public information.
Equifax's OPSEC was horrible all along and a gigantic leak was bound to happen, so the extent and ramifications of this fairly routine breach were, eh, not really considered.
> Cons for not prosecuting:
The execs did it on the same day.
Something something the people something want blood.
If I were a shareholder in Equifax, I would probably be looking very closely at what those executives are paid. They adamantly insist that they had no idea about the most significant thing to happen at their company in years. Clearly, they're not just lazy; they have to be aggressively, emphatically incompetent.
Right, and these execs did that as well. I mentioned that in the "routinely liquidate" part, but I guess it wasn't clear: they do liquidate shares on a schedule.
Sure, doing anything outside that schedule is always a risk, and doing things on that schedule doesn't mean there isn't insider trading still happening. A successful prosecution under these equities-specific market sanctions will rely on more than that.
This emphasizes why companies need to post bonds in case of a breach. They shouldn't have to be sued after the fact. Security should be priced in up front, so they have a massive financial incentive from the get-go to protect confidential government data.
The headline is ambiguous: it's not five months before they disclosed, it's five months before the July date they originally disclosed they were hacked.
> In a statement, the company said the March breach was not related to the hack that exposed the personal and financial data on 143 million U.S. consumers, but one of the people said the breaches involve the same intruders.