So it's perfectly reasonable for them to say they take security seriously and then have these breaches. They take it exactly as seriously as needed to keep their business alive.
Having been on the eBay security team mentioned, I can tell you that we did in fact take security seriously. Sometimes what we recommended was done, and sometimes it was decided the cost wasn't worth the risk. But we were always serious about it.
Given that the listed businesses are still operating and in most cases continuing to turn similar profits to what they were doing before, they appear to have done an excellent job weighing the costs in play.
You're welcome to think that the cost to the business should be higher, either because the public should care more (most paying customers at the moment don't even consider whether the businesses they support care about security) or because of increased fines and penalties for security failings. It may very well be that we need to adjust those things by raising awareness or changing legislation. But given the current state of the world, these businesses all weighed things in a way that kept them doing what they do.
These aren't statements made to the board, they're PR statements made to outsiders after huge breaches of customer/employee personal info. No one is reassured in these times by a company being "serious" about security in the sense that they have supposedly calculated its expected impact on their profitability.
> Given that the listed businesses are still operating and in most cases continuing to turn similar profits to what they were doing before, they appear to have done an excellent job weighing the costs in play.
You can't know whether (even in an internal bookkeeping sense) the "tradeoff" was net positive. The incidents still cost the companies money.
BP still exists and is highly profitable despite the spill and the ongoing costs it incurred. That doesn't mean they gave it serious thought and concluded that inspecting and maintaining the rig would cost more than the $50B+ they spent cleaning it up.
I don't think security can be taken seriously unless specifically compliance is taken seriously.
I'll claim something few security professionals would say at any point in their career: compliance is the magic bullet for taking privacy and security seriously. But to understand why, we have to think about the most reasonable alternative to compliance as an approach to security.
Do you know what “full content packet capture” means? It’s when you grab every piece of data transmitted over a network, using a tool such as tcpdump. You can then use another tool to reassemble those captured packets into complete files: applications, movies or music, even video chats and phone calls.
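For the curious, here’s a minimal sketch of what that reassembly step looks like, assuming scapy is installed and a tcpdump-produced capture.pcap is on hand. It’s deliberately naive: no handling of retransmissions or overlapping segments.

```python
# Naive "full content" reconstruction from a capture file. Assumes scapy
# and a capture produced by something like: tcpdump -w capture.pcap
from collections import defaultdict
from scapy.all import IP, TCP, Raw, rdpcap

streams = defaultdict(list)
for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        # Key each direction of a conversation by its 4-tuple.
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        streams[key].append((pkt[TCP].seq, bytes(pkt[Raw].load)))

for key, segments in streams.items():
    # Order segments by sequence number to rebuild the byte stream;
    # a real reassembler would also dedupe retransmits and handle overlaps.
    payload = b"".join(data for _, data in sorted(segments))
    print(key, "->", len(payload), "bytes reassembled")
```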
As a joke, an instructor once told me that when they went onsite to do security investigations, they would do a full content dump; if someone was downloading videos at the time, my instructor would say, “Thanks for the movie, guys!”
This is what the Utah Data Center is doing. All packets are collected using a tool like tcpdump, and then the Center reassembles them and categorizes the content into data cubes (e.g., movies, video calls, emails) that are easily findable with open source search engines such as Lucene and Solr. Is this taking security seriously? You bet!
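To make the indexing half concrete, here’s a hedged sketch of pushing one reconstructed, categorized item into Solr over its JSON update API. The core name "captures" and the fields are invented for illustration.

```python
# Index one categorized, reconstructed item into Solr so it's findable
# with a plain keyword search. Core name and schema are hypothetical.
import requests

doc = {
    "id": "stream-42",
    "kind": "email",                     # the "data cube" category
    "body": "reassembled payload text",  # output of a protocol decoder
}
resp = requests.post(
    "http://localhost:8983/solr/captures/update/json/docs",
    params={"commit": "true"},
    json=doc,
)
resp.raise_for_status()
```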
Enhanced Intrusion Detection Systems are how security is being taken seriously, and at a cost to privacy. If a provider like Akamai — which carries some 15% of the world’s web traffic — is able to reassemble your packets, and even archive them, is that what you want security to be?
Probably not. But when a security professional says "let's take security seriously", to them it means: watch all the things. I remember in 2006, after my IDS training, I was all for collecting every single packet, no matter what, because I was taught that as a security professional this was the only way I could do my job: watch everything, since we need to know everything about everything. If you watch everything, that’s inherently security, and the world is protected. That was my modus operandi for many years.
This is why taking privacy and security seriously really means taking compliance seriously. Compliance reins in "watch everything" while also providing validation and proof of security. If we’re doing good hygiene on our systems (e.g. key rotation, log review, change control), there’s no reason to collect and watch everything.
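One of those hygiene items is easy to automate. As a sketch, using AWS IAM access keys as the example and an assumed 90-day rotation policy (boto3 and AWS credentials required):

```python
# Flag IAM access keys older than the rotation window. The 90-day window
# is an example policy, not a mandate; adapt to your compliance framework.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")

def stale_keys(user_name):
    resp = iam.list_access_keys(UserName=user_name)
    now = datetime.now(timezone.utc)
    return [
        meta["AccessKeyId"]
        for meta in resp["AccessKeyMetadata"]
        if meta["Status"] == "Active" and now - meta["CreateDate"] > MAX_AGE
    ]

print(stale_keys("deploy-bot"))  # keys due for rotation ("deploy-bot" is made up)
```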
The recipe to collect everything already exists but that means only security is being taken seriously. I hope we start to take compliance seriously and bring that into the privacy/security equation.
Option 1. Collect all the things.
Option 2. Just do good hygiene.
Aside from these two options, what else is available to provide the most reasonable approach to protecting our customers' sensitive data?
EDIT: I'm the head of a compliance agency
Your previous 2 comments seem to be drawing a very strange dichotomy where the only options for "security" are "capture all traffic" and "compliance". I'm not even sure where to begin in responding to that, because it's so far beyond any facts you provided in either update.
Neither compliance nor traffic capture are "security". Capturing and analyzing traffic can be a facet of a security stance, and structured compliance frameworks can provide structure and goalposts for measuring your security stance, but there's a near-infinite range of other factors at play here.
This video is proof: https://www.youtube.com/watch?v=R63CRBNLE2o
Other security researchers have gone so far as suggesting Penetration Testing and Risk Assessment are the most reasonable approaches to providing security for sensitive data.
Businesses have to architect for the world we have, yes, but still.
If something doesn't hit its bottom line, should a commercial company, which is predicated on providing shareholder value, care about it?
Personally, as distasteful as it is, I think the only way to make companies take proper account of externalities is regulation/law.
If you take the example of health and safety, or building regulations, there is a reason why those are (in many countries) enshrined in law and not done by companies spontaneously.
It just turns out that most people are actually perfectly okay with periodic security breaches as long as the hassle from it is substantially below the amortized savings from less hassle in their day-to-day usage.
So all of your data is protected according to that risk profile, adjusted to feedback as we get data on customer sensitivities.
You know, like a professional manages assets.
People love to hate companies for their security practices when a breach happens. It's much harder to form a dispassionate, nuanced view from a place of critical thinking.
Security is bloody difficult, even for lauded members of the community. Schneier apologises in his blog when he says he still uses Windows, for example.
"All of these companies have decided that the risk of losing this data was worth the cost savings of not protecting against the attack. So far they all seem to be right (not even Sony went out of business after their massive breach)."
"Went out of business" is not the right criteria for saying "they were right". The question is whether (the EV of) the cost of a breach exceeded (the EV of) the cost of protecting against a breech. It's probable that the delta between those two was not enough to put Sony out of business - they're a massive company with high margins.
If they don't have such a clause, then I don't think they take the security of their clients' data very seriously.
They should instead say that they take their business seriously, and don't care about the security of their clients' data as long as they don't walk away.
Security is always the very last thing they care about, until there's a huge very costly breach. Then they care for 2 months, and I get to actually work on the security stuff, and get other developers to cooperate, and clean up the known messes left all over in the typical mad dash of feature addition and replacement. Then it's all forgotten about again.
They should say "we suck, we focused 100% on features and market share, we know now what's important", and they should get security right. It does kinda suck that the market often rewards companies that prioritize all else above security, and I wish such companies all the damage a breach can cause. Otherwise, there's no reason to not suck at security.
They should just be honest: "This is what happens when you make a product people love. It's insecure and data is lost and service is interrupted. But you all love it so thanks :)". People should not be under the illusion that their favored products and services are secure. They should know they love insecure shit.
What most people don't realise when it comes to computer security is that the foundation on which our modern systems are built never anticipated this much growth.
I think I am happy with companies that care enough to come forward and admit their mistakes. IT security is hard, very very hard.
You really need a good security guy who can be the bad guy and stop projects in their tracks when it's clear there are security issues. Because asking the same people who are accountable for shipping to stop the presses to fix even the obvious shit you already know about is a challenge - much less investing resources in 'shoring up' against attacks you don't anticipate.
If there are people that truly do care, they should stand up in the early process and make sure enough budget is allocated.
> “We take security seriously”, otherwise known as “We didn’t take it seriously enough”
This implies that if only the companies that got breached had taken security more seriously, they wouldn't have gotten breached. In a world where databases are valuable (the AFF example cited in the post, for example), software is virtually impossible to get to zero defects, and where zero-day vulnerabilities are traded on the open market, some big fish are going to get popped.
The idea that getting breached means you're incompetent is toxic and needs to stop; it often just means that you're a sufficiently high-value target. It's very possible (and quite likely) that many of those breached expended extensive efforts on defense. The idea that they would've been fine with more security expenditure is not just a baseless assumption; it is in many cases patently false (granted, there are plenty of cases where it's true). As a security professional working in customer-facing security, I'm helping exactly the people who are getting breached, so if anything I have a monetary motive to say that they aren't spending enough and should give me more money ;-)
The article also ignores that knowing that you got popped probably already means that you're in one of the higher percentiles of security posture... That's sad, but, again, blaming the victims here helps no-one.
(By the way, if you too would like to help people who get breached instead of making fun of them, we're hiring. Contact info in HN profile.)
Of course companies tell you that they were victims of highly sophisticated APT attacks; otherwise they would have to admit they had been compromised by simple script-kiddie attacks.
Of course blaming victims doesn't help anyone. I want to emphasize here that not only the companies are victims but also the users and customers, who mostly never received compensation after security failures.
It's legal, it's consensual, it's something you and your partners enjoy--and yet, you really would prefer that it not be Googleable during a board meeting.
Considering how puritanically and shamefully we handle talking about sexuality in the workplace these days (despite claiming to support tolerance and diversity more than ever), it hardly seems surprising that folks may still want privacy.
Or that you've paid for sex in Reno? Or that you've done heroin in Portugal? None of these is illegal in its respective area, yet all are socially frowned upon.
Are you not capable of empathizing with other people who don't do exactly what you do? Or are you just being pigheaded?
Businesses, just practice INFOSEC instead of preaching it. Better that way.
It's quite easy to point a finger when you have a few servers that you spend all of your time securing and monitoring. It's something else when you have departments of people connected to your network and running services, breaking policies, taking laptops out of the office, etc.
Nobody wants to see their data being breached. I applaud companies for publicly sharing their investigation and response.
I'd argue many of the companies mentioned have invested heavily in security. Whether their investment will prevent a compromise from a determined adversary is likely unrelated to their investment.
Unfortunately, I suspect many of the mentioned companies had not equally invested in how to properly communicate a compromise with their customers.
* the technology adoption curve of attack techniques has made powerful attacks available to the "early majority" cohort of criminals
* computer science has demonstrated flaws in foundational technology that weren't widely known 15 years ago
* as online attacks have mainstreamed, an industry value chain has developed to monetize weaknesses
So it's business-as-usual in IT, despite the fact that when it comes to security, "usual" has been redefined.
Basically no medium/medium-large company in the world is willing to spend the amount of money (and make the usability sacrifices) it would take to reasonably ensure resiliency against attacks. The most secure firms spend anomalous amounts of money simply to elevate themselves to a point where they (a) aren't the easiest targets and (b) have bought enough time to detect and respond to incidents as they occur.
* Fine-grained segmentation of all networks on need-to-know basis, informed by the org chart and detailed role descriptions for virtually all employees.
* No employee access to arbitrary Internet sites
* For employees that require Internet access, air gaps between computers that can hit Google and computers that can access company email
* Formal audits for minor software releases
* Expensive, heavily tested secure coding training for all developers
* Adoption of secure coding/design standards ("this is the XYZcorp way to make an SQL query" and "this is the XYZcorp way to render HTML"). Strict bans on deprecated interfaces. (See the SQL sketch after this list.)
* Employee access to sensitive internal applications (like document and image management) gated through Citrix-like environments, so you have to remote terminal in to get to the browser that actually talks to the application.
* Extremely minimal access provided to VPN users.
* Total 802.1X-style lockdown of network ports and fascist policies against bringing your own devices.
And so on.
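To illustrate the secure-coding-standard bullet, here’s a hedged sketch of what "the XYZcorp way to make an SQL query" might look like: one blessed helper that refuses anything but bound parameters. XYZcorp is hypothetical, and the quote check is a crude stand-in for a real lint rule.

```python
# The only sanctioned query path: placeholders plus bound parameters.
# Direct conn.execute() with interpolated strings would be banned by lint.
import sqlite3

def run_query(conn, sql, params=()):
    if "'" in sql or '"' in sql:
        # Inline literals usually mean someone interpolated data into SQL.
        raise ValueError("inline literals are banned; bind parameters instead")
    return conn.execute(sql, tuple(params))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
run_query(conn, "INSERT INTO users (name) VALUES (?)", ("alice",))
rows = run_query(conn, "SELECT name FROM users WHERE name = ?", ("alice",)).fetchall()
print(rows)  # [('alice',)]
```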
There is zero chance this is ever going to describe any huge company.
I've experienced this myself inside a major central bank. Most development staff were a) demoralised and b) frequently adopting insecure work-arounds.
About the closest I've come to seeing any of these bullets deployed reliably is Microsoft's developer training and deprecation of the standard C library, and that initiative was so out-of-the-ordinary that it was newsworthy, and widely reported.
But to actually earn that bullet, Microsoft would need to deploy those same measures across all its contractors, and, more importantly, deploy them on internal IT and line of business systems, not just the Windows and Office codebases.
Plus, the posts I read earlier didn't mention 'Fortune 500' or 'reliably' - and I'm glad you didn't use words like 'productive' and 'efficient'!
Paranoia in excess.
That may be a realistic proposal, but I'm skeptical that it's the best answer, maybe because 100% of my Citrix experiences have been awful. Wouldn't "ensure internal apps adhere to all the other points, rewriting them if necessary" be a better answer to the "given infinite money" question?
You'd still have Citrix lockdown environments for any application that you can't rewrite. For instance, an F500 might license the web app it uses for image management for things like scanned medical records.
I should add: I don't think any of these are realistic proposals.
Either they need to sell my address to spammers to finance their business or they delusionally believe I want to read their spam.
zomg! muh freedoms!
That's not to say that user data should go unprotected; the majority of people in the company should have zero access to that, such that even if their systems were compromised user data would remain safe.
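A toy sketch of that deny-by-default stance (the role names are invented for illustration):

```python
# User-data reads are denied unless the caller holds one of a short,
# explicit list of roles. Role names here are hypothetical.
ALLOWED_ROLES = {"support-escalations", "data-protection-officer"}

def read_user_record(requester_roles, record):
    if not ALLOWED_ROLES & set(requester_roles):
        raise PermissionError("user data is deny-by-default")
    return record

# Most employees hold no allowed role, so a compromised workstation in,
# say, marketing cannot reach user data at all.
print(read_user_record({"support-escalations"}, {"id": 1}))
```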
Swiss banks have a tradition of security overkill. Branches in nice areas have bulletproofing rarely seen in the US outside of very bad neighborhoods. It would be un-Swiss not to be prepared.
In that case every employee would have one or more security people standing behind them the whole day. No email would be opened without a phone call to the sender to confirm that it's not spear phishing.
But more realistically: loads and loads of 24/7 monitoring and internal auditing and testing. Attackers have to abuse something to get in, and if the company is internally probing the same things with a large enough team, it's probably 99% secure. Hackers will probably look for easier targets then. Still, this is quite unrealistic in present times.
Which, as it happens, I've rather pointedly not been.
Can you list a few examples of such flaws?
* Exploitable memory corruption (really only began to be recognized in late 1995, and only became mainstream with modern heap exploits, perhaps 10 years later). Really, this trend meaningfully picks up speed with the dawn of the clientside exploit era, in which we no longer obsess about Sendmail vulnerabilities and start obsessing about browser vulnerabilities. In essence: the revelation that memory-unsafe languages are insecure in the presence of virtually any defect, not merely unsafe buffer copies on the stack.
* Side channels, not so much for crypto (crypto attacks are fun but rare, and certainly not the low-hanging fruit used to compromise most networks) but for weaponization of other flaws. Side channels are the other side of the "covert channel" coin, which basically describes surreptitious, unintended exfiltration of data from software. So here you're also talking about things like blind SQLi.
In-band signaling and the insecurity of using simple strings to encode program behavior would be a third major class of foundational flaws, but it isn't new; it was well-known in the late 1980s. But industry certainly hasn't adapted to eliminate it; witness, for instance, very widespread Java application frameworks that embed executable server-side scripting code in UI inputs!
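A minimal illustration of that in-band signaling flaw, with SQL as the example (blind SQLi is the same flaw observed indirectly, one bit at a time, through timing or true/false page differences):

```python
# When data and code share one string, attacker data becomes program
# behavior. Bound parameters move the data out of band.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# In-band: the input is spliced into the command string and reinterpreted
# as SQL, so the WHERE clause now matches every row.
in_band = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % attacker_input
).fetchall()
print(in_band)  # [('alice', 0)] -- the "filter" was rewritten by data

# Out-of-band: the same bytes travel as a bound parameter and stay data.
out_of_band = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(out_of_band)  # [] -- no user is literally named "nobody' OR '1'='1"
```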
There isn't just one alternative, and I don't think there really should be - the fact that most of them are interoperable means that not everyone has to be using the same software. We need to standardize on a protocol, not so much an implementation.
(But yes, there does need to be a nice-looking implementation if we want the general public to use it.)
I just don't have much faith in these idealistic open source projects these days; when I google "diaspora oauth" and the first thing I find is abandonware (https://github.com/diaspora/diaspora-client), it pretty much confirms my cynicism. "Nice-looking" isn't the only thing we need from identity management.
IndieAuth is interesting too: https://indieauth.com/
Saying that security is hard is a cop out. It's the cost of doing business. Most places that are hacked should have been doing more about security.
"The problem is innately difficult because from the
beginning (ENIAC, 1944), due to the high cost of
components, computers were built to share resources
(memory, processors, buses, etc.). If you look for a
one-word synopsis of computer design philosophy, it
was and is SHARING. In the security realm, the one
word synopsis is SEPARATION: keeping the bad
guys away from the good guys’ stuff!
So today, making a computer secure requires
imposing a “separation paradigm” on top of an
architecture built to share. That is tough! Even when
partially successful, the residual problem is going to
be covert channels. We really need to focus on
making a secure computer, not on making a computer
secure – the point of view changes your beginning
assumptions and requirements!"
I'll add that the fundamental constructs and operations of a computer are designed in a dumb way that will follow the most self-defeating orders. Smarter architectures, like the Burroughs B5500 and System/38, existed in the past, with CPUs that enforced fine-grained separation (helps secrecy), protected pointers/stacks/arrays from abuse (stops much code injection), and could spot many problems like interface abuse (80+% of issues) at runtime. The dumb systems were cheaper, faster, and backward-compatible with the garbage tools in widespread use, so the market went with them. To this day, most "security professionals" have no clue that there existed hardware and software systems so resistant to compromise that NSA's red teams gave up on even watered-down versions of them. And then bought them for protecting their most sensitive stuff while pushing weaker stuff on us for their other mission. ;)
Fortunately, there is a tiny, niche part of the security community working on such "high assurance" solutions (e.g. crash-safe.org) or at least better architectures with some higher-assurance components (e.g. genode.org). Our niche is not popular, has few customers, and will never go mainstream due to the tough tradeoffs it forces. Yet real security is gaining a bit more traction in academia and some software firms. One day such platforms might become affordable and widely available enough that you can use one without losing sleep over someone opening an email. :)
I'm not in the know, but it seems pretty basic. Lemme know if I'm wrong.
Liability should first rest with the entities deploying the software, since those are the only ones with a picture of what can go wrong in the context where it's being deployed. Those entities should then be demanding warranty/indemnity as appropriate from their suppliers (or possibly third parties, for F/OSS).
Compare this attitude with the extremely conservative approach in rocketry. The technology is positively from the stone age, but everyone agrees that it is well understood. You can't take any other approach when space missions are multi-decade projects and the price tag has nine figures.
Even if your digital security model is impeccable, I'd wager a large share of these attacks involve a social engineering/disgruntled employee vector as well, which is vastly harder to protect against.
And even the people who do know, often don't get the time to do it correctly.